00:00:00.002 Started by upstream project "autotest-per-patch" build number 122878
00:00:00.002 originally caused by:
00:00:00.002 Started by user sys_sgci
00:00:00.085 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.086 The recommended git tool is: git
00:00:00.086 using credential 00000000-0000-0000-0000-000000000002
00:00:00.087 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.113 Fetching changes from the remote Git repository
00:00:00.114 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.148 Using shallow fetch with depth 1
00:00:00.148 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.148 > git --version # timeout=10
00:00:00.177 > git --version # 'git version 2.39.2'
00:00:00.177 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.178 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.178 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.588 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.600 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.612 Checking out Revision c7986954d8037b9c61764d44ed2af24625b251c6 (FETCH_HEAD)
00:00:05.612 > git config core.sparsecheckout # timeout=10
00:00:05.623 > git read-tree -mu HEAD # timeout=10
00:00:05.638 > git checkout -f c7986954d8037b9c61764d44ed2af24625b251c6 # timeout=5
00:00:05.664 Commit message: "inventory/dev: add missing long names"
00:00:05.664 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10
00:00:05.769 [Pipeline] Start of Pipeline
00:00:05.785 [Pipeline] library
00:00:05.787 Loading library shm_lib@master
00:00:05.787 Library shm_lib@master is cached. Copying from home.
00:00:05.805 [Pipeline] node
00:00:05.827 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.829 [Pipeline] {
00:00:05.838 [Pipeline] catchError
00:00:05.839 [Pipeline] {
00:00:05.864 [Pipeline] wrap
00:00:05.871 [Pipeline] {
00:00:05.879 [Pipeline] stage
00:00:05.881 [Pipeline] { (Prologue)
00:00:06.071 [Pipeline] sh
00:00:06.496 + logger -p user.info -t JENKINS-CI
00:00:06.516 [Pipeline] echo
00:00:06.517 Node: GP11
00:00:06.522 [Pipeline] sh
00:00:06.814 [Pipeline] setCustomBuildProperty
00:00:06.825 [Pipeline] echo
00:00:06.827 Cleanup processes
00:00:06.830 [Pipeline] sh
00:00:07.117 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.117 2578888 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.129 [Pipeline] sh
00:00:07.408 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.409 ++ grep -v 'sudo pgrep'
00:00:07.409 ++ awk '{print $1}'
00:00:07.409 + sudo kill -9
00:00:07.409 + true
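The cleanup step above, condensed into a re-runnable sketch (workspace path as in this job; the xargs -r guard is an assumption standing in for the bare kill -9 plus '+ true' fallback the job uses):

    #!/usr/bin/env bash
    # Kill any SPDK processes left over from a previous run of this workspace.
    # 'grep -v' drops the pgrep invocation itself; 'xargs -r' skips the kill
    # entirely when no stale PIDs remain, so the step never fails the build.
    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' \
        | awk '{print $1}' | xargs -r sudo kill -9 || true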
00:00:07.421 [Pipeline] cleanWs
00:00:07.432 [WS-CLEANUP] Deleting project workspace...
00:00:07.432 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.442 [WS-CLEANUP] done
00:00:07.445 [Pipeline] setCustomBuildProperty
00:00:07.457 [Pipeline] sh
00:00:07.742 + sudo git config --global --replace-all safe.directory '*'
00:00:07.818 [Pipeline] nodesByLabel
00:00:07.819 Found a total of 1 nodes with the 'sorcerer' label
00:00:07.827 [Pipeline] httpRequest
00:00:07.831 HttpMethod: GET
00:00:07.832 URL: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz
00:00:07.836 Sending request to url: http://10.211.164.101/packages/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz
00:00:07.839 Response Code: HTTP/1.1 200 OK
00:00:07.840 Success: Status code 200 is in the accepted range: 200,404
00:00:07.841 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz
00:00:08.247 [Pipeline] sh
00:00:08.528 + tar --no-same-owner -xf jbp_c7986954d8037b9c61764d44ed2af24625b251c6.tar.gz
00:00:08.547 [Pipeline] httpRequest
00:00:08.552 HttpMethod: GET
00:00:08.553 URL: http://10.211.164.101/packages/spdk_08ee631f2287f76d54d98b6c2c35fd15767d0fbe.tar.gz
00:00:08.554 Sending request to url: http://10.211.164.101/packages/spdk_08ee631f2287f76d54d98b6c2c35fd15767d0fbe.tar.gz
00:00:08.558 Response Code: HTTP/1.1 200 OK
00:00:08.558 Success: Status code 200 is in the accepted range: 200,404
00:00:08.559 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_08ee631f2287f76d54d98b6c2c35fd15767d0fbe.tar.gz
00:00:18.660 [Pipeline] sh
00:00:18.938 + tar --no-same-owner -xf spdk_08ee631f2287f76d54d98b6c2c35fd15767d0fbe.tar.gz
00:00:22.229 [Pipeline] sh
00:00:22.530 + git -C spdk log --oneline -n5
00:00:22.530 08ee631f2 [TEST] autotest: collect nvmf coverage
00:00:22.530 3cdbb5383 test: avoid URING sock coverage degradation
00:00:22.530 9e0643d4a sock: add default impl override
00:00:22.530 bff75b6cb sock: check if impl is registered
00:00:22.530 fe2f92165 sock: replace sock impl priorities
00:00:22.541 [Pipeline] }
00:00:22.556 [Pipeline] // stage
00:00:22.564 [Pipeline] stage
00:00:22.566 [Pipeline] { (Prepare)
00:00:22.583 [Pipeline] writeFile
00:00:22.599 [Pipeline] sh
00:00:22.874 + logger -p user.info -t JENKINS-CI
00:00:22.887 [Pipeline] sh
00:00:23.163 + logger -p user.info -t JENKINS-CI
00:00:23.173 [Pipeline] sh
00:00:23.449 + cat autorun-spdk.conf
00:00:23.449 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:23.449 SPDK_TEST_NVMF=1
00:00:23.449 SPDK_TEST_NVME_CLI=1
00:00:23.449 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:23.449 SPDK_TEST_NVMF_NICS=e810
00:00:23.449 SPDK_TEST_VFIOUSER=1
00:00:23.449 SPDK_RUN_UBSAN=1
00:00:23.449 NET_TYPE=phy
00:00:23.456 RUN_NIGHTLY=0
00:00:23.461 [Pipeline] readFile
00:00:23.484 [Pipeline] withEnv
00:00:23.486 [Pipeline] {
00:00:23.499 [Pipeline] sh
00:00:23.777 + set -ex
00:00:23.777 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:23.777 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:23.777 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:23.777 ++ SPDK_TEST_NVMF=1
00:00:23.777 ++ SPDK_TEST_NVME_CLI=1
00:00:23.777 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:23.777 ++ SPDK_TEST_NVMF_NICS=e810
00:00:23.777 ++ SPDK_TEST_VFIOUSER=1
00:00:23.777 ++ SPDK_RUN_UBSAN=1
00:00:23.777 ++ NET_TYPE=phy
00:00:23.777 ++ RUN_NIGHTLY=0
00:00:23.777 + case $SPDK_TEST_NVMF_NICS in
00:00:23.777 + DRIVERS=ice
00:00:23.777 + [[ tcp == \r\d\m\a ]]
00:00:23.777 + [[ -n ice ]]
00:00:23.777 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:23.777 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:23.777 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:23.777 rmmod: ERROR: Module irdma is not currently loaded
00:00:23.777 rmmod: ERROR: Module i40iw is not currently loaded
00:00:23.777 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:23.777 + true
00:00:23.777 + for D in $DRIVERS
00:00:23.777 + sudo modprobe ice
00:00:23.777 + exit 0
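The NIC driver preparation just performed, as a standalone sketch grounded in the trace above: for SPDK_TEST_NVMF_NICS=e810 over TCP the job unloads any RDMA-capable modules and loads ice, tolerating rmmod failures when the modules were never loaded:

    # Unload RDMA NIC modules and load the Intel E810 'ice' driver.
    # rmmod errors are expected and harmless on a clean host.
    DRIVERS=ice
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
    for D in $DRIVERS; do
        sudo modprobe "$D"
    done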
00:00:23.786 [Pipeline] }
00:00:23.805 [Pipeline] // withEnv
00:00:23.811 [Pipeline] }
00:00:23.830 [Pipeline] // stage
00:00:23.839 [Pipeline] catchError
00:00:23.841 [Pipeline] {
00:00:23.856 [Pipeline] timeout
00:00:23.856 Timeout set to expire in 40 min
00:00:23.858 [Pipeline] {
00:00:23.873 [Pipeline] stage
00:00:23.875 [Pipeline] { (Tests)
00:00:23.891 [Pipeline] sh
00:00:24.173 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:24.173 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:24.173 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:24.173 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:24.173 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:24.173 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:24.173 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:24.173 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:24.173 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:24.173 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:24.173 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:24.173 + source /etc/os-release
00:00:24.173 ++ NAME='Fedora Linux'
00:00:24.173 ++ VERSION='38 (Cloud Edition)'
00:00:24.173 ++ ID=fedora
00:00:24.173 ++ VERSION_ID=38
00:00:24.173 ++ VERSION_CODENAME=
00:00:24.173 ++ PLATFORM_ID=platform:f38
00:00:24.173 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:24.173 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:24.173 ++ LOGO=fedora-logo-icon
00:00:24.173 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:24.173 ++ HOME_URL=https://fedoraproject.org/
00:00:24.173 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:24.173 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:24.173 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:24.173 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:24.173 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:24.173 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:24.173 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:24.173 ++ SUPPORT_END=2024-05-14
00:00:24.173 ++ VARIANT='Cloud Edition'
00:00:24.173 ++ VARIANT_ID=cloud
00:00:24.173 + uname -a
00:00:24.173 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:24.173 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:25.546 Hugepages
00:00:25.546 node hugesize free / total
00:00:25.546 node0 1048576kB 0 / 0
00:00:25.546 node0 2048kB 0 / 0
00:00:25.546 node1 1048576kB 0 / 0
00:00:25.546 node1 2048kB 0 / 0
00:00:25.546
00:00:25.546 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:25.546 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:00:25.546 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:00:25.546 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:00:25.546 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:00:25.546 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:00:25.546 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:00:25.546 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:00:25.546 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:00:25.546 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:00:25.546 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:00:25.546 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:00:25.546 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:00:25.546 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:00:25.546 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:00:25.546 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:00:25.546 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:00:25.546 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:25.546 + rm -f /tmp/spdk-ld-path
00:00:25.546 + source autorun-spdk.conf
00:00:25.546 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:25.546 ++ SPDK_TEST_NVMF=1
00:00:25.546 ++ SPDK_TEST_NVME_CLI=1
00:00:25.546 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:25.546 ++ SPDK_TEST_NVMF_NICS=e810
00:00:25.546 ++ SPDK_TEST_VFIOUSER=1
00:00:25.546 ++ SPDK_RUN_UBSAN=1
00:00:25.546 ++ NET_TYPE=phy
00:00:25.546 ++ RUN_NIGHTLY=0
00:00:25.546 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:25.546 + [[ -n '' ]]
00:00:25.546 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:25.546 + for M in /var/spdk/build-*-manifest.txt
00:00:25.546 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:25.546 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:25.546 + for M in /var/spdk/build-*-manifest.txt
00:00:25.546 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:25.546 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:25.546 ++ uname
00:00:25.546 + [[ Linux == \L\i\n\u\x ]]
00:00:25.546 + sudo dmesg -T
00:00:25.546 + sudo dmesg --clear
00:00:25.546 + dmesg_pid=2579651
00:00:25.546 + [[ Fedora Linux == FreeBSD ]]
00:00:25.546 + sudo dmesg -Tw
00:00:25.546 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:25.546 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:25.546 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:25.546 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:25.546 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:25.546 + [[ -x /usr/src/fio-static/fio ]]
00:00:25.546 + export FIO_BIN=/usr/src/fio-static/fio
00:00:25.546 + FIO_BIN=/usr/src/fio-static/fio
00:00:25.546 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:25.546 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:25.546 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:25.546 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:25.546 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:25.546 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:25.546 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:25.546 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:25.546 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:25.546 Test configuration:
00:00:25.546 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:25.546 SPDK_TEST_NVMF=1
00:00:25.546 SPDK_TEST_NVME_CLI=1
00:00:25.546 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:25.546 SPDK_TEST_NVMF_NICS=e810
00:00:25.546 SPDK_TEST_VFIOUSER=1
00:00:25.546 SPDK_RUN_UBSAN=1
00:00:25.546 NET_TYPE=phy
00:00:25.805 RUN_NIGHTLY=0
10:40:41 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
10:40:41 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
10:40:41 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
10:40:41 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
10:40:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:40:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:40:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:40:41 -- paths/export.sh@5 -- $ export PATH
10:40:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:40:41 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
10:40:41 -- common/autobuild_common.sh@437 -- $ date +%s
10:40:41 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715762441.XXXXXX
10:40:41 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715762441.O2LMEm
10:40:41 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]]
10:40:41 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']'
10:40:41 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
10:40:41 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
10:40:41 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
10:40:41 -- common/autobuild_common.sh@453 -- $ get_config_params
10:40:41 -- common/autotest_common.sh@395 -- $ xtrace_disable
10:40:41 -- common/autotest_common.sh@10 -- $ set +x
10:40:41 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
10:40:41 -- common/autobuild_common.sh@455 -- $ start_monitor_resources
10:40:41 -- pm/common@17 -- $ local monitor
10:40:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
10:40:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
10:40:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
10:40:41 -- pm/common@21 -- $ date +%s
10:40:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
10:40:41 -- pm/common@21 -- $ date +%s
10:40:41 -- pm/common@25 -- $ sleep 1
10:40:41 -- pm/common@21 -- $ date +%s
10:40:41 -- pm/common@21 -- $ date +%s
10:40:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715762441
10:40:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715762441
10:40:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715762441
10:40:41 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715762441
00:00:25.805 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715762441_collect-vmstat.pm.log
00:00:25.805 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715762441_collect-cpu-load.pm.log
00:00:25.805 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715762441_collect-cpu-temp.pm.log
00:00:25.805 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715762441_collect-bmc-pm.bmc.pm.log
00:00:26.738 10:40:42 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT
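The monitor startup just traced, summarized as a sketch (script names, flags, and the output directory are taken verbatim from the trace; the shared "monitor.autobuild.sh.<epoch>" prefix is what lets the four .pm.log files be correlated per build):

    # Start the resource monitors in the background, all logging under the
    # same epoch-stamped prefix in ../output/power.
    PM=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
    OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
    ts=$(date +%s)
    for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
        "$PM/$mon" -d "$OUT" -l -p "monitor.autobuild.sh.$ts" &
    done
    sudo -E "$PM/collect-bmc-pm" -d "$OUT" -l -p "monitor.autobuild.sh.$ts" &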
00:00:26.738 10:40:42 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
10:40:42 -- spdk/autobuild.sh@12 -- $ umask 022
10:40:42 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
10:40:42 -- spdk/autobuild.sh@16 -- $ date -u
00:00:26.738 Wed May 15 08:40:42 AM UTC 2024
10:40:42 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:26.738 v24.05-pre-615-g08ee631f2
10:40:42 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
10:40:42 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
10:40:42 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
10:40:42 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']'
10:40:42 -- common/autotest_common.sh@1103 -- $ xtrace_disable
10:40:42 -- common/autotest_common.sh@10 -- $ set +x
00:00:26.738 ************************************
00:00:26.738 START TEST ubsan
00:00:26.738 ************************************
10:40:42 ubsan -- common/autotest_common.sh@1121 -- $ echo 'using ubsan'
00:00:26.738 using ubsan
00:00:26.738
00:00:26.738 real 0m0.000s
00:00:26.738 user 0m0.000s
00:00:26.738 sys 0m0.000s
10:40:42 ubsan -- common/autotest_common.sh@1122 -- $ xtrace_disable
10:40:42 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:26.738 ************************************
00:00:26.738 END TEST ubsan
00:00:26.738 ************************************
10:40:42 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
10:40:42 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
10:40:42 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
10:40:42 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
10:40:42 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
10:40:42 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
10:40:42 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
10:40:42 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
10:40:42 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:26.996 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:26.996 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:27.254 Using 'verbs' RDMA provider
00:00:37.784 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:00:47.759 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:00:47.759 Creating mk/config.mk...done.
00:00:47.759 Creating mk/cc.flags.mk...done.
00:00:47.759 Type 'make' to build.
10:41:03 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
10:41:03 -- common/autotest_common.sh@1097 -- $ '[' 3 -le 1 ']'
10:41:03 -- common/autotest_common.sh@1103 -- $ xtrace_disable
10:41:03 -- common/autotest_common.sh@10 -- $ set +x
00:00:47.759 ************************************
00:00:47.759 START TEST make
00:00:47.759 ************************************
10:41:03 make -- common/autotest_common.sh@1121 -- $ make -j48
00:00:47.759 make[1]: Nothing to be done for 'all'.
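The configure-and-build step performed above, condensed into a re-runnable sketch (paths as in this workspace; the flag list is copied from the configure line, and -j48 matches the run_test invocation):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
    make -j48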
00:00:49.155 The Meson build system
00:00:49.155 Version: 1.3.1
00:00:49.155 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:00:49.155 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:00:49.155 Build type: native build
00:00:49.155 Project name: libvfio-user
00:00:49.155 Project version: 0.0.1
00:00:49.155 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:00:49.155 C linker for the host machine: cc ld.bfd 2.39-16
00:00:49.155 Host machine cpu family: x86_64
00:00:49.155 Host machine cpu: x86_64
00:00:49.155 Run-time dependency threads found: YES
00:00:49.156 Library dl found: YES
00:00:49.156 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:00:49.156 Run-time dependency json-c found: YES 0.17
00:00:49.156 Run-time dependency cmocka found: YES 1.1.7
00:00:49.156 Program pytest-3 found: NO
00:00:49.156 Program flake8 found: NO
00:00:49.156 Program misspell-fixer found: NO
00:00:49.156 Program restructuredtext-lint found: NO
00:00:49.156 Program valgrind found: YES (/usr/bin/valgrind)
00:00:49.156 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:00:49.156 Compiler for C supports arguments -Wmissing-declarations: YES
00:00:49.156 Compiler for C supports arguments -Wwrite-strings: YES
00:00:49.156 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:00:49.156 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:00:49.156 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:00:49.156 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:00:49.156 Build targets in project: 8
00:00:49.156 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:00:49.156 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:00:49.156
00:00:49.156 libvfio-user 0.0.1
00:00:49.156
00:00:49.156 User defined options
00:00:49.156 buildtype : debug
00:00:49.156 default_library: shared
00:00:49.156 libdir : /usr/local/lib
00:00:49.156
00:00:49.156 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:00:50.102 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:00:50.102 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:00:50.102 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:00:50.102 [3/37] Compiling C object samples/null.p/null.c.o
00:00:50.102 [4/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:00:50.102 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:00:50.102 [6/37] Compiling C object test/unit_tests.p/mocks.c.o
00:00:50.102 [7/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:00:50.102 [8/37] Compiling C object samples/lspci.p/lspci.c.o
00:00:50.102 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:00:50.102 [10/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:00:50.363 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:00:50.363 [12/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:00:50.363 [13/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:00:50.363 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:00:50.363 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:00:50.363 [16/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:00:50.363 [17/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:00:50.363 [18/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:00:50.363 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:00:50.363 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:00:50.363 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:00:50.363 [22/37] Compiling C object samples/server.p/server.c.o
00:00:50.363 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:00:50.363 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:00:50.363 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:00:50.363 [26/37] Compiling C object samples/client.p/client.c.o
00:00:50.363 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:00:50.363 [28/37] Linking target samples/client
00:00:50.363 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:00:50.625 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:00:50.625 [31/37] Linking target test/unit_tests
00:00:50.625 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:00:50.625 [33/37] Linking target samples/null
00:00:50.625 [34/37] Linking target samples/server
00:00:50.625 [35/37] Linking target samples/gpio-pci-idio-16
00:00:50.625 [36/37] Linking target samples/lspci
00:00:50.625 [37/37] Linking target samples/shadow_ioeventfd_server
00:00:50.887 INFO: autodetecting backend as ninja
00:00:50.887 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
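The staged install that follows on the next line, shown standalone: DESTDIR redirects meson install's /usr/local tree into spdk/build/libvfio-user so the artifacts stay inside the workspace (paths verbatim from the log):

    DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
        meson install --quiet \
        -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug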
00:00:50.887 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:00:51.458 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:00:51.458 ninja: no work to do.
00:00:56.736 The Meson build system
00:00:56.736 Version: 1.3.1
00:00:56.736 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:00:56.736 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:00:56.736 Build type: native build
00:00:56.736 Program cat found: YES (/usr/bin/cat)
00:00:56.736 Project name: DPDK
00:00:56.736 Project version: 23.11.0
00:00:56.736 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:00:56.736 C linker for the host machine: cc ld.bfd 2.39-16
00:00:56.736 Host machine cpu family: x86_64
00:00:56.736 Host machine cpu: x86_64
00:00:56.736 Message: ## Building in Developer Mode ##
00:00:56.736 Program pkg-config found: YES (/usr/bin/pkg-config)
00:00:56.736 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:00:56.736 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:00:56.736 Program python3 found: YES (/usr/bin/python3)
00:00:56.736 Program cat found: YES (/usr/bin/cat)
00:00:56.736 Compiler for C supports arguments -march=native: YES
00:00:56.736 Checking for size of "void *" : 8
00:00:56.736 Checking for size of "void *" : 8 (cached)
00:00:56.736 Library m found: YES
00:00:56.736 Library numa found: YES
00:00:56.736 Has header "numaif.h" : YES
00:00:56.736 Library fdt found: NO
00:00:56.736 Library execinfo found: NO
00:00:56.736 Has header "execinfo.h" : YES
00:00:56.736 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:00:56.736 Run-time dependency libarchive found: NO (tried pkgconfig)
00:00:56.736 Run-time dependency libbsd found: NO (tried pkgconfig)
00:00:56.736 Run-time dependency jansson found: NO (tried pkgconfig)
00:00:56.737 Run-time dependency openssl found: YES 3.0.9
00:00:56.737 Run-time dependency libpcap found: YES 1.10.4
00:00:56.737 Has header "pcap.h" with dependency libpcap: YES
00:00:56.737 Compiler for C supports arguments -Wcast-qual: YES
00:00:56.737 Compiler for C supports arguments -Wdeprecated: YES
00:00:56.737 Compiler for C supports arguments -Wformat: YES
00:00:56.737 Compiler for C supports arguments -Wformat-nonliteral: NO
00:00:56.737 Compiler for C supports arguments -Wformat-security: NO
00:00:56.737 Compiler for C supports arguments -Wmissing-declarations: YES
00:00:56.737 Compiler for C supports arguments -Wmissing-prototypes: YES
00:00:56.737 Compiler for C supports arguments -Wnested-externs: YES
00:00:56.737 Compiler for C supports arguments -Wold-style-definition: YES
00:00:56.737 Compiler for C supports arguments -Wpointer-arith: YES
00:00:56.737 Compiler for C supports arguments -Wsign-compare: YES
00:00:56.737 Compiler for C supports arguments -Wstrict-prototypes: YES
00:00:56.737 Compiler for C supports arguments -Wundef: YES
00:00:56.737 Compiler for C supports arguments -Wwrite-strings: YES
00:00:56.737 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:00:56.737 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:00:56.737 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:00:56.737 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:00:56.737 Program objdump found: YES (/usr/bin/objdump)
00:00:56.737 Compiler for C supports arguments -mavx512f: YES
00:00:56.737 Checking if "AVX512 checking" compiles: YES
00:00:56.737 Fetching value of define "__SSE4_2__" : 1
00:00:56.737 Fetching value of define "__AES__" : 1
00:00:56.737 Fetching value of define "__AVX__" : 1
00:00:56.737 Fetching value of define "__AVX2__" : (undefined)
00:00:56.737 Fetching value of define "__AVX512BW__" : (undefined)
00:00:56.737 Fetching value of define "__AVX512CD__" : (undefined)
00:00:56.737 Fetching value of define "__AVX512DQ__" : (undefined)
00:00:56.737 Fetching value of define "__AVX512F__" : (undefined)
00:00:56.737 Fetching value of define "__AVX512VL__" : (undefined)
00:00:56.737 Fetching value of define "__PCLMUL__" : 1
00:00:56.737 Fetching value of define "__RDRND__" : 1
00:00:56.737 Fetching value of define "__RDSEED__" : (undefined)
00:00:56.737 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:00:56.737 Fetching value of define "__znver1__" : (undefined)
00:00:56.737 Fetching value of define "__znver2__" : (undefined)
00:00:56.737 Fetching value of define "__znver3__" : (undefined)
00:00:56.737 Fetching value of define "__znver4__" : (undefined)
00:00:56.737 Compiler for C supports arguments -Wno-format-truncation: YES
00:00:56.737 Message: lib/log: Defining dependency "log"
00:00:56.737 Message: lib/kvargs: Defining dependency "kvargs"
00:00:56.737 Message: lib/telemetry: Defining dependency "telemetry"
00:00:56.737 Checking for function "getentropy" : NO
00:00:56.737 Message: lib/eal: Defining dependency "eal"
00:00:56.737 Message: lib/ring: Defining dependency "ring"
00:00:56.737 Message: lib/rcu: Defining dependency "rcu"
00:00:56.737 Message: lib/mempool: Defining dependency "mempool"
00:00:56.737 Message: lib/mbuf: Defining dependency "mbuf"
00:00:56.737 Fetching value of define "__PCLMUL__" : 1 (cached)
00:00:56.737 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:00:56.737 Compiler for C supports arguments -mpclmul: YES
00:00:56.737 Compiler for C supports arguments -maes: YES
00:00:56.737 Compiler for C supports arguments -mavx512f: YES (cached)
00:00:56.737 Compiler for C supports arguments -mavx512bw: YES
00:00:56.737 Compiler for C supports arguments -mavx512dq: YES
00:00:56.737 Compiler for C supports arguments -mavx512vl: YES
00:00:56.737 Compiler for C supports arguments -mvpclmulqdq: YES
00:00:56.737 Compiler for C supports arguments -mavx2: YES
00:00:56.737 Compiler for C supports arguments -mavx: YES
00:00:56.737 Message: lib/net: Defining dependency "net"
00:00:56.737 Message: lib/meter: Defining dependency "meter"
00:00:56.737 Message: lib/ethdev: Defining dependency "ethdev"
00:00:56.737 Message: lib/pci: Defining dependency "pci"
00:00:56.737 Message: lib/cmdline: Defining dependency "cmdline"
00:00:56.737 Message: lib/hash: Defining dependency "hash"
00:00:56.737 Message: lib/timer: Defining dependency "timer"
00:00:56.737 Message: lib/compressdev: Defining dependency "compressdev"
00:00:56.737 Message: lib/cryptodev: Defining dependency "cryptodev"
00:00:56.737 Message: lib/dmadev: Defining dependency "dmadev"
00:00:56.737 Compiler for C supports arguments -Wno-cast-qual: YES
00:00:56.737 Message: lib/power: Defining dependency "power"
00:00:56.737 Message: lib/reorder: Defining dependency "reorder"
00:00:56.737 Message: lib/security: Defining dependency "security"
00:00:56.737 Has header "linux/userfaultfd.h" : YES
00:00:56.737 Has header "linux/vduse.h" : YES
00:00:56.737 Message: lib/vhost: Defining dependency "vhost"
00:00:56.737 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:00:56.737 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:00:56.737 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:00:56.737 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:00:56.737 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:00:56.737 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:00:56.737 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:00:56.737 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:00:56.737 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:00:56.737 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:00:56.737 Program doxygen found: YES (/usr/bin/doxygen)
00:00:56.737 Configuring doxy-api-html.conf using configuration
00:00:56.737 Configuring doxy-api-man.conf using configuration
00:00:56.737 Program mandb found: YES (/usr/bin/mandb)
00:00:56.737 Program sphinx-build found: NO
00:00:56.737 Configuring rte_build_config.h using configuration
00:00:56.737 Message:
00:00:56.737 =================
00:00:56.737 Applications Enabled
00:00:56.737 =================
00:00:56.737
00:00:56.737 apps:
00:00:56.737
00:00:56.737
00:00:56.737 Message:
00:00:56.737 =================
00:00:56.737 Libraries Enabled
00:00:56.737 =================
00:00:56.737
00:00:56.737 libs:
00:00:56.737 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:00:56.737 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:00:56.737 cryptodev, dmadev, power, reorder, security, vhost,
00:00:56.737
00:00:56.737 Message:
00:00:56.737 ===============
00:00:56.737 Drivers Enabled
00:00:56.737 ===============
00:00:56.737
00:00:56.737 common:
00:00:56.737
00:00:56.737 bus:
00:00:56.737 pci, vdev,
00:00:56.737 mempool:
00:00:56.737 ring,
00:00:56.737 dma:
00:00:56.737
00:00:56.737 net:
00:00:56.737
00:00:56.737 crypto:
00:00:56.737
00:00:56.737 compress:
00:00:56.737
00:00:56.737 vdpa:
00:00:56.737
00:00:56.737
00:00:56.737 Message:
00:00:56.737 =================
00:00:56.737 Content Skipped
00:00:56.737 =================
00:00:56.737
00:00:56.737 apps:
00:00:56.737 dumpcap: explicitly disabled via build config
00:00:56.737 graph: explicitly disabled via build config
00:00:56.737 pdump: explicitly disabled via build config
00:00:56.737 proc-info: explicitly disabled via build config
00:00:56.737 test-acl: explicitly disabled via build config
00:00:56.737 test-bbdev: explicitly disabled via build config
00:00:56.737 test-cmdline: explicitly disabled via build config
00:00:56.737 test-compress-perf: explicitly disabled via build config
00:00:56.737 test-crypto-perf: explicitly disabled via build config
00:00:56.737 test-dma-perf: explicitly disabled via build config
00:00:56.737 test-eventdev: explicitly disabled via build config
00:00:56.737 test-fib: explicitly disabled via build config
00:00:56.737 test-flow-perf: explicitly disabled via build config
00:00:56.737 test-gpudev: explicitly disabled via build config
00:00:56.737 test-mldev: explicitly disabled via build config
00:00:56.737 test-pipeline: explicitly disabled via build config
00:00:56.737 test-pmd: explicitly disabled via build config
00:00:56.737 test-regex: explicitly disabled via build config
00:00:56.737 test-sad: explicitly disabled via build config
00:00:56.737 test-security-perf: explicitly disabled via build config
00:00:56.737
00:00:56.737 libs:
00:00:56.737 metrics: explicitly disabled via build config
00:00:56.737 acl: explicitly disabled via build config
00:00:56.737 bbdev: explicitly disabled via build config
00:00:56.737 bitratestats: explicitly disabled via build config
00:00:56.737 bpf: explicitly disabled via build config
00:00:56.737 cfgfile: explicitly disabled via build config
00:00:56.737 distributor: explicitly disabled via build config
00:00:56.737 efd: explicitly disabled via build config
00:00:56.737 eventdev: explicitly disabled via build config
00:00:56.737 dispatcher: explicitly disabled via build config
00:00:56.737 gpudev: explicitly disabled via build config
00:00:56.737 gro: explicitly disabled via build config
00:00:56.737 gso: explicitly disabled via build config
00:00:56.737 ip_frag: explicitly disabled via build config
00:00:56.737 jobstats: explicitly disabled via build config
00:00:56.737 latencystats: explicitly disabled via build config
00:00:56.737 lpm: explicitly disabled via build config
00:00:56.737 member: explicitly disabled via build config
00:00:56.737 pcapng: explicitly disabled via build config
00:00:56.737 rawdev: explicitly disabled via build config
00:00:56.737 regexdev: explicitly disabled via build config
00:00:56.737 mldev: explicitly disabled via build config
00:00:56.737 rib: explicitly disabled via build config
00:00:56.737 sched: explicitly disabled via build config
00:00:56.737 stack: explicitly disabled via build config
00:00:56.737 ipsec: explicitly disabled via build config
00:00:56.737 pdcp: explicitly disabled via build config
00:00:56.737 fib: explicitly disabled via build config
00:00:56.737 port: explicitly disabled via build config
00:00:56.737 pdump: explicitly disabled via build config
00:00:56.737 table: explicitly disabled via build config
00:00:56.737 pipeline: explicitly disabled via build config
00:00:56.737 graph: explicitly disabled via build config
00:00:56.737 node: explicitly disabled via build config
00:00:56.737
00:00:56.737 drivers:
00:00:56.737 common/cpt: not in enabled drivers build config
00:00:56.737 common/dpaax: not in enabled drivers build config
00:00:56.737 common/iavf: not in enabled drivers build config
00:00:56.737 common/idpf: not in enabled drivers build config
00:00:56.737 common/mvep: not in enabled drivers build config
00:00:56.738 common/octeontx: not in enabled drivers build config
00:00:56.738 bus/auxiliary: not in enabled drivers build config
00:00:56.738 bus/cdx: not in enabled drivers build config
00:00:56.738 bus/dpaa: not in enabled drivers build config
00:00:56.738 bus/fslmc: not in enabled drivers build config
00:00:56.738 bus/ifpga: not in enabled drivers build config
00:00:56.738 bus/platform: not in enabled drivers build config
00:00:56.738 bus/vmbus: not in enabled drivers build config
00:00:56.738 common/cnxk: not in enabled drivers build config
00:00:56.738 common/mlx5: not in enabled drivers build config
00:00:56.738 common/nfp: not in enabled drivers build config
00:00:56.738 common/qat: not in enabled drivers build config
00:00:56.738 common/sfc_efx: not in enabled drivers build config
00:00:56.738 mempool/bucket: not in enabled drivers build config
00:00:56.738 mempool/cnxk: not in enabled drivers build config
00:00:56.738 mempool/dpaa: not in enabled drivers build config
00:00:56.738 mempool/dpaa2: not in enabled drivers build config
00:00:56.738 mempool/octeontx: not in enabled drivers build config
00:00:56.738 mempool/stack: not in enabled drivers build config
00:00:56.738 dma/cnxk: not in enabled drivers build config
00:00:56.738 dma/dpaa: not in enabled drivers build config
00:00:56.738 dma/dpaa2: not in enabled drivers build config
00:00:56.738 dma/hisilicon: not in enabled drivers build config
00:00:56.738 dma/idxd: not in enabled drivers build config
00:00:56.738 dma/ioat: not in enabled drivers build config
00:00:56.738 dma/skeleton: not in enabled drivers build config
00:00:56.738 net/af_packet: not in enabled drivers build config
00:00:56.738 net/af_xdp: not in enabled drivers build config
00:00:56.738 net/ark: not in enabled drivers build config
00:00:56.738 net/atlantic: not in enabled drivers build config
00:00:56.738 net/avp: not in enabled drivers build config
00:00:56.738 net/axgbe: not in enabled drivers build config
00:00:56.738 net/bnx2x: not in enabled drivers build config
00:00:56.738 net/bnxt: not in enabled drivers build config
00:00:56.738 net/bonding: not in enabled drivers build config
00:00:56.738 net/cnxk: not in enabled drivers build config
00:00:56.738 net/cpfl: not in enabled drivers build config
00:00:56.738 net/cxgbe: not in enabled drivers build config
00:00:56.738 net/dpaa: not in enabled drivers build config
00:00:56.738 net/dpaa2: not in enabled drivers build config
00:00:56.738 net/e1000: not in enabled drivers build config
00:00:56.738 net/ena: not in enabled drivers build config
00:00:56.738 net/enetc: not in enabled drivers build config
00:00:56.738 net/enetfec: not in enabled drivers build config
00:00:56.738 net/enic: not in enabled drivers build config
00:00:56.738 net/failsafe: not in enabled drivers build config
00:00:56.738 net/fm10k: not in enabled drivers build config
00:00:56.738 net/gve: not in enabled drivers build config
00:00:56.738 net/hinic: not in enabled drivers build config
00:00:56.738 net/hns3: not in enabled drivers build config
00:00:56.738 net/i40e: not in enabled drivers build config
00:00:56.738 net/iavf: not in enabled drivers build config
00:00:56.738 net/ice: not in enabled drivers build config
00:00:56.738 net/idpf: not in enabled drivers build config
00:00:56.738 net/igc: not in enabled drivers build config
00:00:56.738 net/ionic: not in enabled drivers build config
00:00:56.738 net/ipn3ke: not in enabled drivers build config
00:00:56.738 net/ixgbe: not in enabled drivers build config
00:00:56.738 net/mana: not in enabled drivers build config
00:00:56.738 net/memif: not in enabled drivers build config
00:00:56.738 net/mlx4: not in enabled drivers build config
00:00:56.738 net/mlx5: not in enabled drivers build config
00:00:56.738 net/mvneta: not in enabled drivers build config
00:00:56.738 net/mvpp2: not in enabled drivers build config
00:00:56.738 net/netvsc: not in enabled drivers build config
00:00:56.738 net/nfb: not in enabled drivers build config
00:00:56.738 net/nfp: not in enabled drivers build config
00:00:56.738 net/ngbe: not in enabled drivers build config
00:00:56.738 net/null: not in enabled drivers build config
00:00:56.738 net/octeontx: not in enabled drivers build config
00:00:56.738 net/octeon_ep: not in enabled drivers build config
00:00:56.738 net/pcap: not in enabled drivers build config
00:00:56.738 net/pfe: not in enabled drivers build config
00:00:56.738 net/qede: not in enabled drivers build config
00:00:56.738 net/ring: not in enabled drivers build config
00:00:56.738 net/sfc: not in enabled drivers build config
00:00:56.738 net/softnic: not in enabled drivers build config
00:00:56.738 net/tap: not in enabled drivers build config
00:00:56.738 net/thunderx: not in enabled drivers build config
00:00:56.738 net/txgbe: not in enabled drivers build config
00:00:56.738 net/vdev_netvsc: not in enabled drivers build config
00:00:56.738 net/vhost: not in enabled drivers build config
00:00:56.738 net/virtio: not in enabled drivers build config
00:00:56.738 net/vmxnet3: not in enabled drivers build config
00:00:56.738 raw/*: missing internal dependency, "rawdev"
00:00:56.738 crypto/armv8: not in enabled drivers build config
00:00:56.738 crypto/bcmfs: not in enabled drivers build config
00:00:56.738 crypto/caam_jr: not in enabled drivers build config
00:00:56.738 crypto/ccp: not in enabled drivers build config
00:00:56.738 crypto/cnxk: not in enabled drivers build config
00:00:56.738 crypto/dpaa_sec: not in enabled drivers build config
00:00:56.738 crypto/dpaa2_sec: not in enabled drivers build config
00:00:56.738 crypto/ipsec_mb: not in enabled drivers build config
00:00:56.738 crypto/mlx5: not in enabled drivers build config
00:00:56.738 crypto/mvsam: not in enabled drivers build config
00:00:56.738 crypto/nitrox: not in enabled drivers build config
00:00:56.738 crypto/null: not in enabled drivers build config
00:00:56.738 crypto/octeontx: not in enabled drivers build config
00:00:56.738 crypto/openssl: not in enabled drivers build config
00:00:56.738 crypto/scheduler: not in enabled drivers build config
00:00:56.738 crypto/uadk: not in enabled drivers build config
00:00:56.738 crypto/virtio: not in enabled drivers build config
00:00:56.738 compress/isal: not in enabled drivers build config
00:00:56.738 compress/mlx5: not in enabled drivers build config
00:00:56.738 compress/octeontx: not in enabled drivers build config
00:00:56.738 compress/zlib: not in enabled drivers build config
00:00:56.738 regex/*: missing internal dependency, "regexdev"
00:00:56.738 ml/*: missing internal dependency, "mldev"
00:00:56.738 vdpa/ifc: not in enabled drivers build config
00:00:56.738 vdpa/mlx5: not in enabled drivers build config
00:00:56.738 vdpa/nfp: not in enabled drivers build config
00:00:56.738 vdpa/sfc: not in enabled drivers build config
00:00:56.738 event/*: missing internal dependency, "eventdev"
00:00:56.738 baseband/*: missing internal dependency, "bbdev"
00:00:56.738 gpu/*: missing internal dependency, "gpudev"
00:00:56.738
00:00:56.738
00:00:56.738 Build targets in project: 85
00:00:56.738
00:00:56.738 DPDK 23.11.0
00:00:56.738
00:00:56.738 User defined options
00:00:56.738 buildtype : debug
00:00:56.738 default_library : shared
00:00:56.738 libdir : lib
00:00:56.738 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:56.738 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:00:56.738 c_link_args :
00:00:56.738 cpu_instruction_set: native
00:00:56.738 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib
00:00:56.738 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib
00:00:56.738 enable_docs : false
00:00:56.738 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:00:56.738 enable_kmods : false
00:00:56.738 tests : false
00:00:56.738
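A hedged reconstruction of the DPDK configure step implied by the option summary above (the exact meson invocation is not in the log; option names follow DPDK's standard meson options, and the disable_apps/disable_libs values are the comma-separated lists printed above):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
    meson setup build-tmp \
        --buildtype=debug --default-library=shared --libdir=lib \
        --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Denable_kmods=false -Dtests=false
    # disable_apps / disable_libs would take the comma-separated lists shown above.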
00:00:56.738 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:00:56.738 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:00:56.738 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:00:56.738 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:00:56.738 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:00:56.738 [4/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:00:56.738 [5/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:00:56.738 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:00:56.738 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:00:56.738 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:00:56.738 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:00:56.738 [10/265] Linking static target lib/librte_kvargs.a
00:00:56.738 [11/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:00:56.738 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:00:56.738 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:00:56.997 [14/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:00:56.997 [15/265] Compiling C object lib/librte_log.a.p/log_log.c.o
00:00:56.997 [16/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:00:56.997 [17/265] Linking static target lib/librte_log.a
00:00:56.997 [18/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:00:56.997 [19/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:00:56.997 [20/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:00:57.256 [21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:00:57.518 [22/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:00:57.518 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:00:57.518 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:00:57.784 [25/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:00:57.784 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:00:57.784 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:00:57.784 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:00:57.784 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:00:57.784 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:00:57.784 [31/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:00:57.784 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:00:57.784 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:00:57.784 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:00:57.784 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:00:57.784 [36/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:00:57.784 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:00:57.784 [38/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:00:57.784 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:00:57.784 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:00:57.784 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:00:57.784 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:00:57.784 [43/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:00:57.784 [44/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:00:57.784 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:00:57.784 [46/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:00:57.784 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:00:57.784 [48/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:00:57.784 [49/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:00:57.784 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:00:57.784 [51/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:00:57.784 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:00:57.784 [53/265] Linking static target lib/librte_telemetry.a
00:00:57.784 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:00:57.784 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:00:57.784 [56/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:00:57.784 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:00:57.784 [58/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:00:57.784 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:00:57.784 [60/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:00:57.784 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:00:57.784 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:00:57.784 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:00:57.784 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:00:58.045 [65/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:00:58.045 [66/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:00:58.045 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:00:58.045 [68/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:00:58.045 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:00:58.045 [70/265] Linking static target lib/librte_pci.a
00:00:58.045 [71/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:00:58.045 [72/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:00:58.045 [73/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:00:58.045 [74/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:00:58.045 [75/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:00:58.304 [76/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:00:58.304 [77/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:00:58.304 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:00:58.304 [79/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:00:58.304 [80/265] Linking target lib/librte_log.so.24.0
00:00:58.304 [81/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:00:58.304 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:00:58.304 [83/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:00:58.304 [84/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:00:58.304 [85/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:00:58.304 [86/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:00:58.566 [87/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:00:58.566 [88/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:00:58.566 [89/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:00:58.566 [90/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:00:58.566 [91/265] Linking static target lib/net/libnet_crc_avx512_lib.a
00:00:58.566 [92/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:00:58.566 [93/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:00:58.566 [94/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:00:58.566 [95/265] Linking static target lib/librte_ring.a
00:00:58.566 [96/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:00:58.566 [97/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:00:58.829 [98/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:00:58.829 [99/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:00:58.829 [100/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:00:58.829 [101/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:00:58.829 [102/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:00:58.829 [103/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:00:58.829 [104/265] Linking target lib/librte_kvargs.so.24.0
00:00:58.829 [105/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:00:58.829 [106/265] Linking static target lib/librte_meter.a
00:00:58.829 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:00:58.829 [108/265] Linking static target lib/librte_eal.a
00:00:58.829 [109/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:00:58.829 [110/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:00:58.829 [111/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:00:58.829 [112/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:00:58.829 [113/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:00:58.829 [114/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:00:58.829 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:00:58.829 [116/265] Linking static target lib/librte_rcu.a
00:00:58.829 [117/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:00:58.829 [118/265] Linking static target lib/librte_mempool.a
00:00:58.829 [119/265] Linking target lib/librte_telemetry.so.24.0
00:00:58.829 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:00:58.829 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:00:59.088 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:00:59.088 [123/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:00:59.088 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:00:59.088 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:00:59.088 [126/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:00:59.088 [127/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:00:59.088 [128/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:00:59.088 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:00:59.088 [130/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:00:59.088 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:00:59.088 [132/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:00:59.088 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:00:59.088 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:00:59.088 [135/265] Linking static target lib/librte_cmdline.a
00:00:59.354 [136/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:00:59.354 [137/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:00:59.354 [138/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:00:59.354 [139/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:00:59.354 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:00:59.354 [141/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:00:59.354 [142/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:00:59.354 [143/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:00:59.354 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:00:59.354 [145/265] Linking static target lib/librte_net.a
00:00:59.354 [146/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:00:59.354 [147/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:00:59.354 [148/265] Linking static target lib/librte_timer.a
00:00:59.613 [149/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:00:59.613 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:00:59.613 [151/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:00:59.613 [152/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:00:59.613 [153/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:00:59.613 [154/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:00:59.613 [155/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:00:59.872 [156/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:00:59.872 [157/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:00:59.872 [158/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:00:59.872 [159/265] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:00:59.872 [160/265] Linking static target lib/librte_dmadev.a 00:00:59.872 [161/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:00:59.872 [162/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.872 [163/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:00:59.872 [164/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:00:59.872 [165/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:00:59.872 [166/265] Linking static target lib/librte_hash.a 00:00:59.872 [167/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:00:59.872 [168/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:00:59.872 [169/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:00:59.872 [170/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:00.131 [171/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:00.131 [172/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:00.131 [173/265] Linking static target lib/librte_power.a 00:01:00.131 [174/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:00.131 [175/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:00.131 [176/265] Linking static target lib/librte_compressdev.a 00:01:00.131 [177/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:00.131 [178/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.131 [179/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:00.131 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:00.131 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:00.131 [182/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.131 [183/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:00.131 [184/265] Linking static target lib/librte_mbuf.a 00:01:00.131 [185/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:00.390 [186/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:00.391 [187/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:00.391 [188/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:00.391 [189/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:00.391 [190/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:00.391 [191/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:00.391 [192/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:00.391 [193/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:00.391 [194/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:00.391 [195/265] Linking static target lib/librte_reorder.a 00:01:00.391 [196/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.391 [197/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:00.391 [198/265] Linking static target lib/librte_security.a 00:01:00.391 [199/265] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:00.391 [200/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:00.391 [201/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:00.391 [202/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:00.692 [203/265] Linking static target drivers/librte_bus_vdev.a 00:01:00.692 [204/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.692 [205/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:00.692 [206/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:00.692 [207/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:00.692 [208/265] Linking static target drivers/librte_bus_pci.a 00:01:00.692 [209/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.692 [210/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:00.692 [211/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:00.692 [212/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:00.692 [213/265] Linking static target drivers/librte_mempool_ring.a 00:01:00.692 [214/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.692 [215/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.692 [216/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.692 [217/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:00.692 [218/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:00.966 [219/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:00.966 [220/265] Linking static target lib/librte_ethdev.a 00:01:00.966 [221/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:00.966 [222/265] Linking static target lib/librte_cryptodev.a 00:01:00.966 [223/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.899 [224/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:03.271 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:05.169 [226/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.169 [227/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.169 [228/265] Linking target lib/librte_eal.so.24.0 00:01:05.169 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:05.169 [230/265] Linking target lib/librte_pci.so.24.0 00:01:05.169 [231/265] Linking target lib/librte_meter.so.24.0 00:01:05.169 [232/265] Linking target lib/librte_dmadev.so.24.0 00:01:05.169 [233/265] Linking target lib/librte_ring.so.24.0 00:01:05.169 [234/265] Linking target lib/librte_timer.so.24.0 00:01:05.169 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:05.427 [236/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:05.427 [237/265] Generating symbol file 
lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:05.427 [238/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:05.427 [239/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:05.427 [240/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:05.427 [241/265] Linking target lib/librte_rcu.so.24.0 00:01:05.427 [242/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:05.427 [243/265] Linking target lib/librte_mempool.so.24.0 00:01:05.684 [244/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:05.684 [245/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:05.684 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:05.685 [247/265] Linking target lib/librte_mbuf.so.24.0 00:01:05.685 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:05.685 [249/265] Linking target lib/librte_compressdev.so.24.0 00:01:05.685 [250/265] Linking target lib/librte_reorder.so.24.0 00:01:05.685 [251/265] Linking target lib/librte_net.so.24.0 00:01:05.685 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:01:05.942 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:05.942 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:05.942 [255/265] Linking target lib/librte_security.so.24.0 00:01:05.942 [256/265] Linking target lib/librte_cmdline.so.24.0 00:01:05.942 [257/265] Linking target lib/librte_hash.so.24.0 00:01:05.942 [258/265] Linking target lib/librte_ethdev.so.24.0 00:01:06.200 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:06.200 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:06.200 [261/265] Linking target lib/librte_power.so.24.0 00:01:08.725 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:08.725 [263/265] Linking static target lib/librte_vhost.a 00:01:09.658 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:09.658 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:09.658 INFO: autodetecting backend as ninja 00:01:09.659 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:10.593 CC lib/log/log.o 00:01:10.593 CC lib/ut/ut.o 00:01:10.593 CC lib/log/log_flags.o 00:01:10.593 CC lib/log/log_deprecated.o 00:01:10.593 CC lib/ut_mock/mock.o 00:01:10.593 LIB libspdk_ut_mock.a 00:01:10.593 SO libspdk_ut_mock.so.6.0 00:01:10.593 LIB libspdk_log.a 00:01:10.593 LIB libspdk_ut.a 00:01:10.593 SO libspdk_log.so.7.0 00:01:10.593 SO libspdk_ut.so.2.0 00:01:10.593 SYMLINK libspdk_ut_mock.so 00:01:10.593 SYMLINK libspdk_ut.so 00:01:10.593 SYMLINK libspdk_log.so 00:01:10.852 CC lib/dma/dma.o 00:01:10.852 CXX lib/trace_parser/trace.o 00:01:10.852 CC lib/ioat/ioat.o 00:01:10.852 CC lib/util/base64.o 00:01:10.852 CC lib/util/bit_array.o 00:01:10.852 CC lib/util/cpuset.o 00:01:10.852 CC lib/util/crc16.o 00:01:10.852 CC lib/util/crc32.o 00:01:10.852 CC lib/util/crc32c.o 00:01:10.852 CC lib/util/crc32_ieee.o 00:01:10.852 CC lib/util/crc64.o 00:01:10.852 CC lib/util/dif.o 00:01:10.852 CC lib/util/fd.o 00:01:10.852 CC lib/util/file.o 00:01:10.852 CC lib/util/hexlify.o 00:01:10.852 CC lib/util/iov.o 00:01:10.852 CC 
lib/util/math.o 00:01:10.852 CC lib/util/pipe.o 00:01:10.852 CC lib/util/strerror_tls.o 00:01:10.852 CC lib/util/string.o 00:01:10.852 CC lib/util/uuid.o 00:01:10.852 CC lib/util/fd_group.o 00:01:10.852 CC lib/util/xor.o 00:01:10.852 CC lib/util/zipf.o 00:01:10.852 CC lib/vfio_user/host/vfio_user_pci.o 00:01:10.852 CC lib/vfio_user/host/vfio_user.o 00:01:11.110 LIB libspdk_dma.a 00:01:11.110 SO libspdk_dma.so.4.0 00:01:11.110 LIB libspdk_ioat.a 00:01:11.110 SYMLINK libspdk_dma.so 00:01:11.110 SO libspdk_ioat.so.7.0 00:01:11.110 SYMLINK libspdk_ioat.so 00:01:11.368 LIB libspdk_vfio_user.a 00:01:11.368 SO libspdk_vfio_user.so.5.0 00:01:11.368 SYMLINK libspdk_vfio_user.so 00:01:11.368 LIB libspdk_util.a 00:01:11.368 SO libspdk_util.so.9.0 00:01:11.626 SYMLINK libspdk_util.so 00:01:11.885 CC lib/conf/conf.o 00:01:11.885 CC lib/json/json_parse.o 00:01:11.885 CC lib/rdma/common.o 00:01:11.885 CC lib/vmd/vmd.o 00:01:11.885 CC lib/idxd/idxd.o 00:01:11.885 CC lib/env_dpdk/env.o 00:01:11.885 CC lib/rdma/rdma_verbs.o 00:01:11.885 CC lib/vmd/led.o 00:01:11.885 CC lib/env_dpdk/memory.o 00:01:11.885 CC lib/json/json_util.o 00:01:11.885 CC lib/idxd/idxd_user.o 00:01:11.885 CC lib/json/json_write.o 00:01:11.885 CC lib/env_dpdk/pci.o 00:01:11.885 CC lib/env_dpdk/init.o 00:01:11.885 CC lib/env_dpdk/threads.o 00:01:11.885 CC lib/env_dpdk/pci_ioat.o 00:01:11.885 LIB libspdk_trace_parser.a 00:01:11.885 CC lib/env_dpdk/pci_virtio.o 00:01:11.885 CC lib/env_dpdk/pci_vmd.o 00:01:11.885 CC lib/env_dpdk/pci_idxd.o 00:01:11.885 CC lib/env_dpdk/pci_event.o 00:01:11.885 CC lib/env_dpdk/sigbus_handler.o 00:01:11.885 CC lib/env_dpdk/pci_dpdk.o 00:01:11.885 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:11.885 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:11.885 SO libspdk_trace_parser.so.5.0 00:01:11.885 SYMLINK libspdk_trace_parser.so 00:01:12.144 LIB libspdk_conf.a 00:01:12.144 SO libspdk_conf.so.6.0 00:01:12.144 LIB libspdk_rdma.a 00:01:12.144 SYMLINK libspdk_conf.so 00:01:12.144 SO libspdk_rdma.so.6.0 00:01:12.144 LIB libspdk_json.a 00:01:12.144 SYMLINK libspdk_rdma.so 00:01:12.144 SO libspdk_json.so.6.0 00:01:12.144 SYMLINK libspdk_json.so 00:01:12.402 LIB libspdk_idxd.a 00:01:12.402 CC lib/jsonrpc/jsonrpc_server.o 00:01:12.402 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:12.402 CC lib/jsonrpc/jsonrpc_client.o 00:01:12.402 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:12.402 SO libspdk_idxd.so.12.0 00:01:12.402 LIB libspdk_vmd.a 00:01:12.402 SO libspdk_vmd.so.6.0 00:01:12.402 SYMLINK libspdk_idxd.so 00:01:12.660 SYMLINK libspdk_vmd.so 00:01:12.660 LIB libspdk_jsonrpc.a 00:01:12.660 SO libspdk_jsonrpc.so.6.0 00:01:12.918 SYMLINK libspdk_jsonrpc.so 00:01:12.918 CC lib/rpc/rpc.o 00:01:13.177 LIB libspdk_rpc.a 00:01:13.177 SO libspdk_rpc.so.6.0 00:01:13.177 SYMLINK libspdk_rpc.so 00:01:13.435 CC lib/trace/trace.o 00:01:13.435 CC lib/keyring/keyring.o 00:01:13.435 CC lib/trace/trace_flags.o 00:01:13.435 CC lib/trace/trace_rpc.o 00:01:13.435 CC lib/keyring/keyring_rpc.o 00:01:13.435 CC lib/notify/notify.o 00:01:13.435 CC lib/notify/notify_rpc.o 00:01:13.693 LIB libspdk_notify.a 00:01:13.693 SO libspdk_notify.so.6.0 00:01:13.693 LIB libspdk_keyring.a 00:01:13.693 SYMLINK libspdk_notify.so 00:01:13.693 LIB libspdk_trace.a 00:01:13.693 SO libspdk_keyring.so.1.0 00:01:13.693 SO libspdk_trace.so.10.0 00:01:13.693 SYMLINK libspdk_keyring.so 00:01:13.693 SYMLINK libspdk_trace.so 00:01:13.693 LIB libspdk_env_dpdk.a 00:01:13.951 SO libspdk_env_dpdk.so.14.0 00:01:13.951 CC lib/thread/thread.o 00:01:13.951 CC lib/thread/iobuf.o 00:01:13.951 CC 
lib/sock/sock.o 00:01:13.951 CC lib/sock/sock_rpc.o 00:01:13.951 SYMLINK libspdk_env_dpdk.so 00:01:14.209 LIB libspdk_sock.a 00:01:14.467 SO libspdk_sock.so.9.0 00:01:14.467 SYMLINK libspdk_sock.so 00:01:14.467 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:14.467 CC lib/nvme/nvme_ctrlr.o 00:01:14.467 CC lib/nvme/nvme_fabric.o 00:01:14.467 CC lib/nvme/nvme_ns_cmd.o 00:01:14.467 CC lib/nvme/nvme_ns.o 00:01:14.467 CC lib/nvme/nvme_pcie_common.o 00:01:14.467 CC lib/nvme/nvme_pcie.o 00:01:14.467 CC lib/nvme/nvme_qpair.o 00:01:14.467 CC lib/nvme/nvme.o 00:01:14.467 CC lib/nvme/nvme_quirks.o 00:01:14.467 CC lib/nvme/nvme_transport.o 00:01:14.467 CC lib/nvme/nvme_discovery.o 00:01:14.467 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:14.468 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:14.468 CC lib/nvme/nvme_tcp.o 00:01:14.468 CC lib/nvme/nvme_opal.o 00:01:14.468 CC lib/nvme/nvme_io_msg.o 00:01:14.468 CC lib/nvme/nvme_poll_group.o 00:01:14.468 CC lib/nvme/nvme_zns.o 00:01:14.468 CC lib/nvme/nvme_stubs.o 00:01:14.468 CC lib/nvme/nvme_auth.o 00:01:14.468 CC lib/nvme/nvme_cuse.o 00:01:14.468 CC lib/nvme/nvme_vfio_user.o 00:01:14.468 CC lib/nvme/nvme_rdma.o 00:01:15.402 LIB libspdk_thread.a 00:01:15.402 SO libspdk_thread.so.10.0 00:01:15.690 SYMLINK libspdk_thread.so 00:01:15.690 CC lib/init/json_config.o 00:01:15.690 CC lib/blob/blobstore.o 00:01:15.690 CC lib/init/subsystem.o 00:01:15.690 CC lib/blob/request.o 00:01:15.690 CC lib/init/subsystem_rpc.o 00:01:15.690 CC lib/blob/zeroes.o 00:01:15.690 CC lib/init/rpc.o 00:01:15.690 CC lib/blob/blob_bs_dev.o 00:01:15.690 CC lib/virtio/virtio.o 00:01:15.690 CC lib/vfu_tgt/tgt_endpoint.o 00:01:15.690 CC lib/virtio/virtio_vhost_user.o 00:01:15.690 CC lib/accel/accel.o 00:01:15.690 CC lib/vfu_tgt/tgt_rpc.o 00:01:15.690 CC lib/virtio/virtio_vfio_user.o 00:01:15.690 CC lib/accel/accel_rpc.o 00:01:15.690 CC lib/virtio/virtio_pci.o 00:01:15.690 CC lib/accel/accel_sw.o 00:01:15.952 LIB libspdk_init.a 00:01:15.952 SO libspdk_init.so.5.0 00:01:16.211 LIB libspdk_virtio.a 00:01:16.211 LIB libspdk_vfu_tgt.a 00:01:16.211 SYMLINK libspdk_init.so 00:01:16.211 SO libspdk_vfu_tgt.so.3.0 00:01:16.211 SO libspdk_virtio.so.7.0 00:01:16.211 SYMLINK libspdk_vfu_tgt.so 00:01:16.211 SYMLINK libspdk_virtio.so 00:01:16.211 CC lib/event/app.o 00:01:16.211 CC lib/event/reactor.o 00:01:16.211 CC lib/event/log_rpc.o 00:01:16.211 CC lib/event/app_rpc.o 00:01:16.211 CC lib/event/scheduler_static.o 00:01:16.777 LIB libspdk_event.a 00:01:16.777 SO libspdk_event.so.13.0 00:01:16.777 SYMLINK libspdk_event.so 00:01:16.777 LIB libspdk_accel.a 00:01:16.777 SO libspdk_accel.so.15.0 00:01:16.777 LIB libspdk_nvme.a 00:01:16.777 SYMLINK libspdk_accel.so 00:01:17.035 SO libspdk_nvme.so.13.0 00:01:17.035 CC lib/bdev/bdev.o 00:01:17.035 CC lib/bdev/bdev_rpc.o 00:01:17.035 CC lib/bdev/bdev_zone.o 00:01:17.035 CC lib/bdev/part.o 00:01:17.035 CC lib/bdev/scsi_nvme.o 00:01:17.293 SYMLINK libspdk_nvme.so 00:01:18.667 LIB libspdk_blob.a 00:01:18.667 SO libspdk_blob.so.11.0 00:01:18.925 SYMLINK libspdk_blob.so 00:01:18.925 CC lib/blobfs/blobfs.o 00:01:18.925 CC lib/blobfs/tree.o 00:01:18.925 CC lib/lvol/lvol.o 00:01:19.864 LIB libspdk_bdev.a 00:01:19.864 SO libspdk_bdev.so.15.0 00:01:19.864 SYMLINK libspdk_bdev.so 00:01:19.864 LIB libspdk_blobfs.a 00:01:19.864 LIB libspdk_lvol.a 00:01:19.864 SO libspdk_blobfs.so.10.0 00:01:19.864 SO libspdk_lvol.so.10.0 00:01:19.864 CC lib/scsi/dev.o 00:01:19.864 CC lib/nvmf/ctrlr.o 00:01:19.864 CC lib/scsi/lun.o 00:01:19.864 CC lib/nvmf/ctrlr_discovery.o 00:01:19.864 CC lib/scsi/port.o 
00:01:19.864 CC lib/nvmf/ctrlr_bdev.o
00:01:19.864 CC lib/scsi/scsi.o
00:01:19.864 CC lib/ftl/ftl_core.o
00:01:19.864 CC lib/nvmf/subsystem.o
00:01:19.864 CC lib/nvmf/nvmf.o
00:01:19.864 CC lib/ftl/ftl_init.o
00:01:19.864 CC lib/scsi/scsi_bdev.o
00:01:19.864 CC lib/scsi/scsi_pr.o
00:01:19.864 CC lib/nvmf/nvmf_rpc.o
00:01:19.864 CC lib/ftl/ftl_layout.o
00:01:19.864 CC lib/scsi/scsi_rpc.o
00:01:19.864 CC lib/nvmf/transport.o
00:01:19.864 CC lib/ublk/ublk.o
00:01:19.864 CC lib/ftl/ftl_debug.o
00:01:19.864 CC lib/scsi/task.o
00:01:19.864 CC lib/ublk/ublk_rpc.o
00:01:19.864 CC lib/ftl/ftl_io.o
00:01:19.864 CC lib/nvmf/tcp.o
00:01:19.864 CC lib/ftl/ftl_sb.o
00:01:19.864 CC lib/nvmf/stubs.o
00:01:19.864 CC lib/nbd/nbd.o
00:01:19.864 CC lib/nvmf/vfio_user.o
00:01:19.864 CC lib/nbd/nbd_rpc.o
00:01:19.864 CC lib/nvmf/rdma.o
00:01:19.864 CC lib/ftl/ftl_l2p.o
00:01:19.864 CC lib/ftl/ftl_l2p_flat.o
00:01:19.864 CC lib/ftl/ftl_nv_cache.o
00:01:19.864 CC lib/nvmf/auth.o
00:01:19.864 CC lib/ftl/ftl_band.o
00:01:19.864 CC lib/ftl/ftl_band_ops.o
00:01:19.864 CC lib/ftl/ftl_writer.o
00:01:19.864 CC lib/ftl/ftl_rq.o
00:01:19.864 CC lib/ftl/ftl_reloc.o
00:01:19.864 CC lib/ftl/ftl_l2p_cache.o
00:01:19.864 CC lib/ftl/ftl_p2l.o
00:01:19.864 CC lib/ftl/mngt/ftl_mngt.o
00:01:19.864 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:01:19.864 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:01:19.864 CC lib/ftl/mngt/ftl_mngt_startup.o
00:01:19.864 CC lib/ftl/mngt/ftl_mngt_md.o
00:01:19.864 CC lib/ftl/mngt/ftl_mngt_misc.o
00:01:19.864 SYMLINK libspdk_lvol.so
00:01:19.864 SYMLINK libspdk_blobfs.so
00:01:19.864 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:01:19.864 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:01:20.437 CC lib/ftl/mngt/ftl_mngt_band.o
00:01:20.437 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:01:20.437 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:01:20.437 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:01:20.437 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:01:20.437 CC lib/ftl/utils/ftl_conf.o
00:01:20.437 CC lib/ftl/utils/ftl_md.o
00:01:20.437 CC lib/ftl/utils/ftl_mempool.o
00:01:20.437 CC lib/ftl/utils/ftl_bitmap.o
00:01:20.437 CC lib/ftl/utils/ftl_property.o
00:01:20.437 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:01:20.437 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:01:20.437 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:01:20.437 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:01:20.437 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:01:20.437 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:01:20.437 CC lib/ftl/upgrade/ftl_sb_v3.o
00:01:20.437 CC lib/ftl/upgrade/ftl_sb_v5.o
00:01:20.437 CC lib/ftl/nvc/ftl_nvc_dev.o
00:01:20.437 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:01:20.437 CC lib/ftl/base/ftl_base_dev.o
00:01:20.437 CC lib/ftl/base/ftl_base_bdev.o
00:01:20.437 CC lib/ftl/ftl_trace.o
00:01:20.700 LIB libspdk_nbd.a
00:01:20.700 SO libspdk_nbd.so.7.0
00:01:20.959 SYMLINK libspdk_nbd.so
00:01:20.959 LIB libspdk_scsi.a
00:01:20.959 SO libspdk_scsi.so.9.0
00:01:20.959 LIB libspdk_ublk.a
00:01:20.959 SYMLINK libspdk_scsi.so
00:01:20.959 SO libspdk_ublk.so.3.0
00:01:20.959 SYMLINK libspdk_ublk.so
00:01:21.217 CC lib/vhost/vhost.o
00:01:21.217 CC lib/iscsi/conn.o
00:01:21.217 CC lib/iscsi/init_grp.o
00:01:21.217 CC lib/vhost/vhost_rpc.o
00:01:21.217 CC lib/iscsi/iscsi.o
00:01:21.217 CC lib/vhost/vhost_scsi.o
00:01:21.217 CC lib/vhost/vhost_blk.o
00:01:21.217 CC lib/iscsi/md5.o
00:01:21.217 CC lib/iscsi/param.o
00:01:21.217 CC lib/vhost/rte_vhost_user.o
00:01:21.217 CC lib/iscsi/portal_grp.o
00:01:21.217 CC lib/iscsi/tgt_node.o
00:01:21.217 CC lib/iscsi/iscsi_subsystem.o
00:01:21.217 CC lib/iscsi/iscsi_rpc.o
00:01:21.217 CC lib/iscsi/task.o
00:01:21.217 LIB libspdk_ftl.a
00:01:21.475 SO libspdk_ftl.so.9.0
00:01:21.734 SYMLINK libspdk_ftl.so
00:01:22.300 LIB libspdk_vhost.a
00:01:22.300 SO libspdk_vhost.so.8.0
00:01:22.558 LIB libspdk_nvmf.a
00:01:22.558 SYMLINK libspdk_vhost.so
00:01:22.558 SO libspdk_nvmf.so.18.0
00:01:22.558 LIB libspdk_iscsi.a
00:01:22.558 SO libspdk_iscsi.so.8.0
00:01:22.816 SYMLINK libspdk_nvmf.so
00:01:22.816 SYMLINK libspdk_iscsi.so
00:01:23.074 CC module/vfu_device/vfu_virtio.o
00:01:23.074 CC module/vfu_device/vfu_virtio_blk.o
00:01:23.074 CC module/env_dpdk/env_dpdk_rpc.o
00:01:23.074 CC module/vfu_device/vfu_virtio_scsi.o
00:01:23.074 CC module/vfu_device/vfu_virtio_rpc.o
00:01:23.074 CC module/accel/ioat/accel_ioat.o
00:01:23.074 CC module/sock/posix/posix.o
00:01:23.074 CC module/accel/ioat/accel_ioat_rpc.o
00:01:23.075 CC module/accel/error/accel_error.o
00:01:23.075 CC module/accel/iaa/accel_iaa.o
00:01:23.075 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:01:23.075 CC module/blob/bdev/blob_bdev.o
00:01:23.075 CC module/accel/error/accel_error_rpc.o
00:01:23.075 CC module/scheduler/dynamic/scheduler_dynamic.o
00:01:23.075 CC module/accel/iaa/accel_iaa_rpc.o
00:01:23.075 CC module/scheduler/gscheduler/gscheduler.o
00:01:23.075 CC module/keyring/file/keyring.o
00:01:23.075 CC module/keyring/file/keyring_rpc.o
00:01:23.075 CC module/accel/dsa/accel_dsa.o
00:01:23.075 CC module/accel/dsa/accel_dsa_rpc.o
00:01:23.075 LIB libspdk_env_dpdk_rpc.a
00:01:23.075 SO libspdk_env_dpdk_rpc.so.6.0
00:01:23.333 SYMLINK libspdk_env_dpdk_rpc.so
00:01:23.333 LIB libspdk_keyring_file.a
00:01:23.333 LIB libspdk_scheduler_dpdk_governor.a
00:01:23.333 LIB libspdk_scheduler_gscheduler.a
00:01:23.333 SO libspdk_scheduler_dpdk_governor.so.4.0
00:01:23.333 SO libspdk_scheduler_gscheduler.so.4.0
00:01:23.333 SO libspdk_keyring_file.so.1.0
00:01:23.333 LIB libspdk_accel_error.a
00:01:23.333 LIB libspdk_accel_ioat.a
00:01:23.333 LIB libspdk_scheduler_dynamic.a
00:01:23.333 SO libspdk_accel_error.so.2.0
00:01:23.333 LIB libspdk_accel_iaa.a
00:01:23.333 SO libspdk_scheduler_dynamic.so.4.0
00:01:23.333 SO libspdk_accel_ioat.so.6.0
00:01:23.333 SYMLINK libspdk_scheduler_gscheduler.so
00:01:23.333 SYMLINK libspdk_scheduler_dpdk_governor.so
00:01:23.333 SYMLINK libspdk_keyring_file.so
00:01:23.333 SO libspdk_accel_iaa.so.3.0
00:01:23.333 LIB libspdk_accel_dsa.a
00:01:23.333 SYMLINK libspdk_accel_error.so
00:01:23.333 SYMLINK libspdk_scheduler_dynamic.so
00:01:23.333 SYMLINK libspdk_accel_ioat.so
00:01:23.333 SO libspdk_accel_dsa.so.5.0
00:01:23.333 LIB libspdk_blob_bdev.a
00:01:23.333 SYMLINK libspdk_accel_iaa.so
00:01:23.333 SO libspdk_blob_bdev.so.11.0
00:01:23.333 SYMLINK libspdk_accel_dsa.so
00:01:23.593 SYMLINK libspdk_blob_bdev.so
00:01:23.593 LIB libspdk_vfu_device.a
00:01:23.593 SO libspdk_vfu_device.so.3.0
00:01:23.593 CC module/bdev/lvol/vbdev_lvol.o
00:01:23.593 CC module/bdev/error/vbdev_error.o
00:01:23.593 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:01:23.593 CC module/bdev/error/vbdev_error_rpc.o
00:01:23.593 CC module/bdev/null/bdev_null.o
00:01:23.593 CC module/bdev/null/bdev_null_rpc.o
00:01:23.593 CC module/blobfs/bdev/blobfs_bdev.o
00:01:23.593 CC module/bdev/raid/bdev_raid.o
00:01:23.593 CC module/bdev/delay/vbdev_delay.o
00:01:23.593 CC module/bdev/nvme/bdev_nvme.o
00:01:23.593 CC module/bdev/malloc/bdev_malloc.o
00:01:23.593 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:01:23.593 CC module/bdev/split/vbdev_split.o
00:01:23.593 CC module/bdev/malloc/bdev_malloc_rpc.o
00:01:23.593 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:23.852 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:23.852 CC module/bdev/raid/bdev_raid_rpc.o 00:01:23.852 CC module/bdev/nvme/nvme_rpc.o 00:01:23.852 CC module/bdev/gpt/gpt.o 00:01:23.852 CC module/bdev/raid/raid0.o 00:01:23.852 CC module/bdev/raid/bdev_raid_sb.o 00:01:23.852 CC module/bdev/split/vbdev_split_rpc.o 00:01:23.852 CC module/bdev/gpt/vbdev_gpt.o 00:01:23.852 CC module/bdev/passthru/vbdev_passthru.o 00:01:23.852 CC module/bdev/raid/concat.o 00:01:23.852 CC module/bdev/nvme/bdev_mdns_client.o 00:01:23.852 CC module/bdev/raid/raid1.o 00:01:23.852 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:23.853 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:23.853 CC module/bdev/nvme/vbdev_opal.o 00:01:23.853 CC module/bdev/ftl/bdev_ftl.o 00:01:23.853 CC module/bdev/iscsi/bdev_iscsi.o 00:01:23.853 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:23.853 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:23.853 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:23.853 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:23.853 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:23.853 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:23.853 CC module/bdev/aio/bdev_aio.o 00:01:23.853 CC module/bdev/aio/bdev_aio_rpc.o 00:01:23.853 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:23.853 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:23.853 SYMLINK libspdk_vfu_device.so 00:01:23.853 LIB libspdk_sock_posix.a 00:01:23.853 SO libspdk_sock_posix.so.6.0 00:01:24.111 SYMLINK libspdk_sock_posix.so 00:01:24.111 LIB libspdk_bdev_error.a 00:01:24.111 LIB libspdk_blobfs_bdev.a 00:01:24.111 SO libspdk_bdev_error.so.6.0 00:01:24.111 SO libspdk_blobfs_bdev.so.6.0 00:01:24.111 LIB libspdk_bdev_null.a 00:01:24.111 LIB libspdk_bdev_split.a 00:01:24.111 SYMLINK libspdk_bdev_error.so 00:01:24.111 SO libspdk_bdev_null.so.6.0 00:01:24.111 SYMLINK libspdk_blobfs_bdev.so 00:01:24.111 SO libspdk_bdev_split.so.6.0 00:01:24.111 LIB libspdk_bdev_ftl.a 00:01:24.111 LIB libspdk_bdev_passthru.a 00:01:24.111 SO libspdk_bdev_ftl.so.6.0 00:01:24.369 LIB libspdk_bdev_gpt.a 00:01:24.369 SO libspdk_bdev_passthru.so.6.0 00:01:24.369 SYMLINK libspdk_bdev_null.so 00:01:24.369 LIB libspdk_bdev_aio.a 00:01:24.369 LIB libspdk_bdev_zone_block.a 00:01:24.369 SYMLINK libspdk_bdev_split.so 00:01:24.369 SO libspdk_bdev_gpt.so.6.0 00:01:24.369 SO libspdk_bdev_aio.so.6.0 00:01:24.369 SO libspdk_bdev_zone_block.so.6.0 00:01:24.369 SYMLINK libspdk_bdev_ftl.so 00:01:24.369 SYMLINK libspdk_bdev_passthru.so 00:01:24.369 LIB libspdk_bdev_malloc.a 00:01:24.369 SYMLINK libspdk_bdev_gpt.so 00:01:24.369 SYMLINK libspdk_bdev_aio.so 00:01:24.369 SYMLINK libspdk_bdev_zone_block.so 00:01:24.369 SO libspdk_bdev_malloc.so.6.0 00:01:24.369 LIB libspdk_bdev_iscsi.a 00:01:24.369 LIB libspdk_bdev_delay.a 00:01:24.369 SO libspdk_bdev_iscsi.so.6.0 00:01:24.369 SO libspdk_bdev_delay.so.6.0 00:01:24.369 SYMLINK libspdk_bdev_malloc.so 00:01:24.369 SYMLINK libspdk_bdev_iscsi.so 00:01:24.369 SYMLINK libspdk_bdev_delay.so 00:01:24.369 LIB libspdk_bdev_lvol.a 00:01:24.627 SO libspdk_bdev_lvol.so.6.0 00:01:24.628 LIB libspdk_bdev_virtio.a 00:01:24.628 SO libspdk_bdev_virtio.so.6.0 00:01:24.628 SYMLINK libspdk_bdev_lvol.so 00:01:24.628 SYMLINK libspdk_bdev_virtio.so 00:01:24.886 LIB libspdk_bdev_raid.a 00:01:24.886 SO libspdk_bdev_raid.so.6.0 00:01:24.886 SYMLINK libspdk_bdev_raid.so 00:01:26.261 LIB libspdk_bdev_nvme.a 00:01:26.261 SO libspdk_bdev_nvme.so.7.0 00:01:26.261 SYMLINK libspdk_bdev_nvme.so 00:01:26.520 CC 
module/event/subsystems/iobuf/iobuf.o 00:01:26.520 CC module/event/subsystems/sock/sock.o 00:01:26.520 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:01:26.520 CC module/event/subsystems/scheduler/scheduler.o 00:01:26.520 CC module/event/subsystems/vmd/vmd.o 00:01:26.520 CC module/event/subsystems/keyring/keyring.o 00:01:26.520 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:26.520 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:26.520 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:26.778 LIB libspdk_event_keyring.a 00:01:26.778 LIB libspdk_event_scheduler.a 00:01:26.778 LIB libspdk_event_vfu_tgt.a 00:01:26.778 LIB libspdk_event_vhost_blk.a 00:01:26.778 LIB libspdk_event_sock.a 00:01:26.778 LIB libspdk_event_vmd.a 00:01:26.778 SO libspdk_event_keyring.so.1.0 00:01:26.778 LIB libspdk_event_iobuf.a 00:01:26.778 SO libspdk_event_vfu_tgt.so.3.0 00:01:26.778 SO libspdk_event_vhost_blk.so.3.0 00:01:26.778 SO libspdk_event_scheduler.so.4.0 00:01:26.778 SO libspdk_event_sock.so.5.0 00:01:26.778 SO libspdk_event_vmd.so.6.0 00:01:26.778 SO libspdk_event_iobuf.so.3.0 00:01:26.778 SYMLINK libspdk_event_keyring.so 00:01:26.778 SYMLINK libspdk_event_vfu_tgt.so 00:01:26.778 SYMLINK libspdk_event_vhost_blk.so 00:01:26.778 SYMLINK libspdk_event_scheduler.so 00:01:26.778 SYMLINK libspdk_event_sock.so 00:01:26.778 SYMLINK libspdk_event_vmd.so 00:01:26.778 SYMLINK libspdk_event_iobuf.so 00:01:27.037 CC module/event/subsystems/accel/accel.o 00:01:27.037 LIB libspdk_event_accel.a 00:01:27.295 SO libspdk_event_accel.so.6.0 00:01:27.295 SYMLINK libspdk_event_accel.so 00:01:27.553 CC module/event/subsystems/bdev/bdev.o 00:01:27.553 LIB libspdk_event_bdev.a 00:01:27.553 SO libspdk_event_bdev.so.6.0 00:01:27.553 SYMLINK libspdk_event_bdev.so 00:01:27.811 CC module/event/subsystems/nbd/nbd.o 00:01:27.811 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:27.811 CC module/event/subsystems/ublk/ublk.o 00:01:27.811 CC module/event/subsystems/scsi/scsi.o 00:01:27.811 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:28.069 LIB libspdk_event_nbd.a 00:01:28.069 LIB libspdk_event_ublk.a 00:01:28.069 LIB libspdk_event_scsi.a 00:01:28.069 SO libspdk_event_ublk.so.3.0 00:01:28.069 SO libspdk_event_nbd.so.6.0 00:01:28.069 SO libspdk_event_scsi.so.6.0 00:01:28.069 SYMLINK libspdk_event_nbd.so 00:01:28.069 SYMLINK libspdk_event_ublk.so 00:01:28.069 SYMLINK libspdk_event_scsi.so 00:01:28.069 LIB libspdk_event_nvmf.a 00:01:28.069 SO libspdk_event_nvmf.so.6.0 00:01:28.069 SYMLINK libspdk_event_nvmf.so 00:01:28.328 CC module/event/subsystems/iscsi/iscsi.o 00:01:28.328 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:28.328 LIB libspdk_event_vhost_scsi.a 00:01:28.328 LIB libspdk_event_iscsi.a 00:01:28.328 SO libspdk_event_vhost_scsi.so.3.0 00:01:28.328 SO libspdk_event_iscsi.so.6.0 00:01:28.588 SYMLINK libspdk_event_vhost_scsi.so 00:01:28.588 SYMLINK libspdk_event_iscsi.so 00:01:28.588 SO libspdk.so.6.0 00:01:28.588 SYMLINK libspdk.so 00:01:28.852 CC app/trace_record/trace_record.o 00:01:28.852 CXX app/trace/trace.o 00:01:28.852 CC test/rpc_client/rpc_client_test.o 00:01:28.852 TEST_HEADER include/spdk/accel.h 00:01:28.852 CC app/spdk_lspci/spdk_lspci.o 00:01:28.852 TEST_HEADER include/spdk/accel_module.h 00:01:28.852 CC app/spdk_nvme_perf/perf.o 00:01:28.852 TEST_HEADER include/spdk/assert.h 00:01:28.852 CC app/spdk_nvme_identify/identify.o 00:01:28.852 CC app/spdk_nvme_discover/discovery_aer.o 00:01:28.852 CC app/spdk_top/spdk_top.o 00:01:28.852 TEST_HEADER include/spdk/barrier.h 00:01:28.852 TEST_HEADER 
include/spdk/base64.h 00:01:28.852 TEST_HEADER include/spdk/bdev.h 00:01:28.852 TEST_HEADER include/spdk/bdev_module.h 00:01:28.852 TEST_HEADER include/spdk/bdev_zone.h 00:01:28.852 TEST_HEADER include/spdk/bit_array.h 00:01:28.852 TEST_HEADER include/spdk/bit_pool.h 00:01:28.852 TEST_HEADER include/spdk/blob_bdev.h 00:01:28.852 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:28.852 TEST_HEADER include/spdk/blobfs.h 00:01:28.852 TEST_HEADER include/spdk/blob.h 00:01:28.852 TEST_HEADER include/spdk/conf.h 00:01:28.852 TEST_HEADER include/spdk/config.h 00:01:28.852 TEST_HEADER include/spdk/cpuset.h 00:01:28.852 TEST_HEADER include/spdk/crc16.h 00:01:28.852 TEST_HEADER include/spdk/crc32.h 00:01:28.852 TEST_HEADER include/spdk/crc64.h 00:01:28.852 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:28.852 TEST_HEADER include/spdk/dif.h 00:01:28.852 CC app/spdk_dd/spdk_dd.o 00:01:28.852 TEST_HEADER include/spdk/dma.h 00:01:28.852 TEST_HEADER include/spdk/endian.h 00:01:28.852 TEST_HEADER include/spdk/env_dpdk.h 00:01:28.852 TEST_HEADER include/spdk/env.h 00:01:28.852 TEST_HEADER include/spdk/event.h 00:01:28.852 CC app/iscsi_tgt/iscsi_tgt.o 00:01:28.852 CC app/vhost/vhost.o 00:01:28.852 TEST_HEADER include/spdk/fd_group.h 00:01:28.852 TEST_HEADER include/spdk/fd.h 00:01:28.852 TEST_HEADER include/spdk/file.h 00:01:28.852 CC app/nvmf_tgt/nvmf_main.o 00:01:28.852 TEST_HEADER include/spdk/ftl.h 00:01:28.852 TEST_HEADER include/spdk/gpt_spec.h 00:01:28.852 TEST_HEADER include/spdk/hexlify.h 00:01:28.852 TEST_HEADER include/spdk/histogram_data.h 00:01:28.852 TEST_HEADER include/spdk/idxd.h 00:01:28.852 TEST_HEADER include/spdk/idxd_spec.h 00:01:28.852 TEST_HEADER include/spdk/init.h 00:01:28.852 TEST_HEADER include/spdk/ioat.h 00:01:28.852 TEST_HEADER include/spdk/ioat_spec.h 00:01:28.852 TEST_HEADER include/spdk/iscsi_spec.h 00:01:28.852 CC examples/util/zipf/zipf.o 00:01:28.852 CC test/app/jsoncat/jsoncat.o 00:01:28.852 TEST_HEADER include/spdk/json.h 00:01:28.852 CC examples/vmd/led/led.o 00:01:28.852 CC examples/ioat/perf/perf.o 00:01:28.852 CC examples/vmd/lsvmd/lsvmd.o 00:01:28.852 TEST_HEADER include/spdk/jsonrpc.h 00:01:28.852 TEST_HEADER include/spdk/keyring.h 00:01:28.852 TEST_HEADER include/spdk/keyring_module.h 00:01:28.852 CC app/spdk_tgt/spdk_tgt.o 00:01:28.852 CC examples/accel/perf/accel_perf.o 00:01:28.852 CC examples/ioat/verify/verify.o 00:01:28.852 CC test/app/histogram_perf/histogram_perf.o 00:01:28.852 TEST_HEADER include/spdk/likely.h 00:01:28.852 CC test/event/event_perf/event_perf.o 00:01:28.852 TEST_HEADER include/spdk/log.h 00:01:28.852 CC test/thread/poller_perf/poller_perf.o 00:01:28.852 CC test/nvme/aer/aer.o 00:01:28.852 CC test/app/stub/stub.o 00:01:28.852 CC examples/nvme/hello_world/hello_world.o 00:01:28.852 TEST_HEADER include/spdk/lvol.h 00:01:28.852 CC test/env/vtophys/vtophys.o 00:01:28.852 CC app/fio/nvme/fio_plugin.o 00:01:28.852 TEST_HEADER include/spdk/memory.h 00:01:28.852 CC examples/nvme/reconnect/reconnect.o 00:01:28.852 TEST_HEADER include/spdk/mmio.h 00:01:28.852 CC examples/idxd/perf/perf.o 00:01:28.852 TEST_HEADER include/spdk/nbd.h 00:01:28.852 CC examples/sock/hello_world/hello_sock.o 00:01:28.852 TEST_HEADER include/spdk/notify.h 00:01:29.110 TEST_HEADER include/spdk/nvme.h 00:01:29.110 TEST_HEADER include/spdk/nvme_intel.h 00:01:29.110 CC test/event/reactor/reactor.o 00:01:29.110 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:29.110 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:29.110 TEST_HEADER include/spdk/nvme_spec.h 00:01:29.110 TEST_HEADER 
include/spdk/nvme_zns.h 00:01:29.110 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:29.110 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:29.110 TEST_HEADER include/spdk/nvmf.h 00:01:29.110 TEST_HEADER include/spdk/nvmf_spec.h 00:01:29.110 TEST_HEADER include/spdk/nvmf_transport.h 00:01:29.110 CC test/bdev/bdevio/bdevio.o 00:01:29.110 CC examples/blob/cli/blobcli.o 00:01:29.110 CC test/accel/dif/dif.o 00:01:29.110 TEST_HEADER include/spdk/opal.h 00:01:29.110 CC examples/bdev/hello_world/hello_bdev.o 00:01:29.110 CC examples/nvmf/nvmf/nvmf.o 00:01:29.110 TEST_HEADER include/spdk/opal_spec.h 00:01:29.110 CC test/dma/test_dma/test_dma.o 00:01:29.110 TEST_HEADER include/spdk/pci_ids.h 00:01:29.110 CC examples/bdev/bdevperf/bdevperf.o 00:01:29.110 CC test/blobfs/mkfs/mkfs.o 00:01:29.110 TEST_HEADER include/spdk/pipe.h 00:01:29.110 CC examples/blob/hello_world/hello_blob.o 00:01:29.110 CC examples/thread/thread/thread_ex.o 00:01:29.110 CC test/app/bdev_svc/bdev_svc.o 00:01:29.110 TEST_HEADER include/spdk/queue.h 00:01:29.110 TEST_HEADER include/spdk/reduce.h 00:01:29.110 TEST_HEADER include/spdk/rpc.h 00:01:29.110 TEST_HEADER include/spdk/scheduler.h 00:01:29.110 TEST_HEADER include/spdk/scsi.h 00:01:29.110 TEST_HEADER include/spdk/scsi_spec.h 00:01:29.110 TEST_HEADER include/spdk/sock.h 00:01:29.110 TEST_HEADER include/spdk/stdinc.h 00:01:29.110 TEST_HEADER include/spdk/string.h 00:01:29.110 TEST_HEADER include/spdk/thread.h 00:01:29.110 TEST_HEADER include/spdk/trace.h 00:01:29.110 TEST_HEADER include/spdk/trace_parser.h 00:01:29.110 TEST_HEADER include/spdk/tree.h 00:01:29.110 TEST_HEADER include/spdk/ublk.h 00:01:29.111 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:29.111 LINK spdk_lspci 00:01:29.111 TEST_HEADER include/spdk/util.h 00:01:29.111 TEST_HEADER include/spdk/uuid.h 00:01:29.111 TEST_HEADER include/spdk/version.h 00:01:29.111 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:29.111 CC test/env/mem_callbacks/mem_callbacks.o 00:01:29.111 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:29.111 TEST_HEADER include/spdk/vhost.h 00:01:29.111 TEST_HEADER include/spdk/vmd.h 00:01:29.111 TEST_HEADER include/spdk/xor.h 00:01:29.111 TEST_HEADER include/spdk/zipf.h 00:01:29.111 CXX test/cpp_headers/accel.o 00:01:29.111 CC test/lvol/esnap/esnap.o 00:01:29.111 LINK rpc_client_test 00:01:29.111 LINK lsvmd 00:01:29.111 LINK jsoncat 00:01:29.396 LINK spdk_nvme_discover 00:01:29.396 LINK interrupt_tgt 00:01:29.396 LINK led 00:01:29.396 LINK poller_perf 00:01:29.396 LINK vtophys 00:01:29.396 LINK histogram_perf 00:01:29.396 LINK event_perf 00:01:29.396 LINK zipf 00:01:29.396 LINK vhost 00:01:29.396 LINK nvmf_tgt 00:01:29.396 LINK reactor 00:01:29.396 LINK spdk_trace_record 00:01:29.396 LINK iscsi_tgt 00:01:29.396 LINK stub 00:01:29.396 LINK spdk_tgt 00:01:29.396 LINK ioat_perf 00:01:29.396 LINK verify 00:01:29.396 LINK bdev_svc 00:01:29.396 CXX test/cpp_headers/accel_module.o 00:01:29.396 LINK hello_world 00:01:29.396 LINK mkfs 00:01:29.396 LINK hello_sock 00:01:29.396 LINK hello_blob 00:01:29.663 LINK hello_bdev 00:01:29.663 LINK aer 00:01:29.663 LINK thread 00:01:29.663 LINK spdk_dd 00:01:29.663 CXX test/cpp_headers/assert.o 00:01:29.663 LINK nvmf 00:01:29.663 LINK idxd_perf 00:01:29.663 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:29.663 CXX test/cpp_headers/barrier.o 00:01:29.663 LINK reconnect 00:01:29.663 LINK spdk_trace 00:01:29.663 CXX test/cpp_headers/base64.o 00:01:29.663 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:29.663 CC test/env/memory/memory_ut.o 00:01:29.663 LINK bdevio 
00:01:29.663 CC examples/nvme/arbitration/arbitration.o 00:01:29.664 CC test/nvme/reset/reset.o 00:01:29.664 CC test/event/reactor_perf/reactor_perf.o 00:01:29.664 CC app/fio/bdev/fio_plugin.o 00:01:29.664 LINK dif 00:01:29.664 CC test/nvme/sgl/sgl.o 00:01:29.664 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:29.927 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:29.927 LINK test_dma 00:01:29.927 CXX test/cpp_headers/bdev.o 00:01:29.927 CC test/event/app_repeat/app_repeat.o 00:01:29.927 CC test/event/scheduler/scheduler.o 00:01:29.927 CXX test/cpp_headers/bdev_module.o 00:01:29.927 LINK accel_perf 00:01:29.927 CC test/env/pci/pci_ut.o 00:01:29.927 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:29.927 CC test/nvme/e2edp/nvme_dp.o 00:01:29.927 CC test/nvme/overhead/overhead.o 00:01:29.927 CC test/nvme/err_injection/err_injection.o 00:01:29.927 CC test/nvme/startup/startup.o 00:01:29.927 CC examples/nvme/hotplug/hotplug.o 00:01:29.927 LINK nvme_fuzz 00:01:29.927 CXX test/cpp_headers/bdev_zone.o 00:01:29.927 CXX test/cpp_headers/bit_array.o 00:01:29.927 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:29.927 CC test/nvme/reserve/reserve.o 00:01:29.927 LINK blobcli 00:01:29.927 LINK env_dpdk_post_init 00:01:29.927 LINK spdk_nvme 00:01:29.927 LINK reactor_perf 00:01:29.927 CXX test/cpp_headers/bit_pool.o 00:01:30.192 CC examples/nvme/abort/abort.o 00:01:30.192 CC test/nvme/connect_stress/connect_stress.o 00:01:30.192 CC test/nvme/simple_copy/simple_copy.o 00:01:30.192 LINK app_repeat 00:01:30.192 CC test/nvme/boot_partition/boot_partition.o 00:01:30.192 CXX test/cpp_headers/blob_bdev.o 00:01:30.192 CC test/nvme/compliance/nvme_compliance.o 00:01:30.192 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:30.192 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:30.192 CXX test/cpp_headers/blobfs_bdev.o 00:01:30.192 CC test/nvme/fused_ordering/fused_ordering.o 00:01:30.192 CC test/nvme/fdp/fdp.o 00:01:30.192 CXX test/cpp_headers/blobfs.o 00:01:30.192 CXX test/cpp_headers/blob.o 00:01:30.192 CXX test/cpp_headers/conf.o 00:01:30.192 CC test/nvme/cuse/cuse.o 00:01:30.192 CXX test/cpp_headers/config.o 00:01:30.192 CXX test/cpp_headers/cpuset.o 00:01:30.192 CXX test/cpp_headers/crc16.o 00:01:30.192 LINK mem_callbacks 00:01:30.192 CXX test/cpp_headers/crc32.o 00:01:30.192 LINK startup 00:01:30.192 LINK reset 00:01:30.192 CXX test/cpp_headers/crc64.o 00:01:30.192 LINK err_injection 00:01:30.192 LINK scheduler 00:01:30.192 CXX test/cpp_headers/dif.o 00:01:30.455 LINK sgl 00:01:30.455 CXX test/cpp_headers/dma.o 00:01:30.455 LINK cmb_copy 00:01:30.455 LINK arbitration 00:01:30.455 CXX test/cpp_headers/endian.o 00:01:30.455 LINK spdk_nvme_perf 00:01:30.455 CXX test/cpp_headers/env_dpdk.o 00:01:30.455 LINK nvme_dp 00:01:30.455 LINK hotplug 00:01:30.455 CXX test/cpp_headers/env.o 00:01:30.455 LINK reserve 00:01:30.455 LINK connect_stress 00:01:30.455 LINK spdk_nvme_identify 00:01:30.455 LINK overhead 00:01:30.455 CXX test/cpp_headers/event.o 00:01:30.455 LINK boot_partition 00:01:30.455 CXX test/cpp_headers/fd_group.o 00:01:30.455 LINK spdk_top 00:01:30.455 LINK pmr_persistence 00:01:30.455 LINK bdevperf 00:01:30.455 CXX test/cpp_headers/fd.o 00:01:30.455 LINK doorbell_aers 00:01:30.455 LINK pci_ut 00:01:30.455 LINK simple_copy 00:01:30.455 CXX test/cpp_headers/file.o 00:01:30.455 CXX test/cpp_headers/ftl.o 00:01:30.716 CXX test/cpp_headers/gpt_spec.o 00:01:30.716 CXX test/cpp_headers/hexlify.o 00:01:30.716 CXX test/cpp_headers/histogram_data.o 00:01:30.716 LINK fused_ordering 00:01:30.716 CXX 
test/cpp_headers/idxd.o 00:01:30.716 CXX test/cpp_headers/idxd_spec.o 00:01:30.716 CXX test/cpp_headers/init.o 00:01:30.716 CXX test/cpp_headers/ioat.o 00:01:30.716 LINK vhost_fuzz 00:01:30.716 LINK nvme_manage 00:01:30.716 CXX test/cpp_headers/ioat_spec.o 00:01:30.716 CXX test/cpp_headers/iscsi_spec.o 00:01:30.716 CXX test/cpp_headers/json.o 00:01:30.716 CXX test/cpp_headers/jsonrpc.o 00:01:30.716 CXX test/cpp_headers/keyring_module.o 00:01:30.716 CXX test/cpp_headers/likely.o 00:01:30.716 CXX test/cpp_headers/keyring.o 00:01:30.716 LINK spdk_bdev 00:01:30.716 CXX test/cpp_headers/log.o 00:01:30.716 CXX test/cpp_headers/lvol.o 00:01:30.716 CXX test/cpp_headers/memory.o 00:01:30.716 CXX test/cpp_headers/mmio.o 00:01:30.716 LINK abort 00:01:30.716 CXX test/cpp_headers/nbd.o 00:01:30.716 CXX test/cpp_headers/notify.o 00:01:30.716 CXX test/cpp_headers/nvme.o 00:01:30.716 LINK nvme_compliance 00:01:30.716 CXX test/cpp_headers/nvme_ocssd.o 00:01:30.716 CXX test/cpp_headers/nvme_intel.o 00:01:30.716 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:30.716 CXX test/cpp_headers/nvme_spec.o 00:01:30.716 CXX test/cpp_headers/nvme_zns.o 00:01:30.716 CXX test/cpp_headers/nvmf_cmd.o 00:01:30.979 LINK fdp 00:01:30.979 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:30.979 CXX test/cpp_headers/nvmf.o 00:01:30.979 CXX test/cpp_headers/nvmf_spec.o 00:01:30.979 CXX test/cpp_headers/nvmf_transport.o 00:01:30.979 CXX test/cpp_headers/opal.o 00:01:30.979 CXX test/cpp_headers/opal_spec.o 00:01:30.979 CXX test/cpp_headers/pci_ids.o 00:01:30.979 CXX test/cpp_headers/pipe.o 00:01:30.979 CXX test/cpp_headers/queue.o 00:01:30.979 CXX test/cpp_headers/reduce.o 00:01:30.979 CXX test/cpp_headers/rpc.o 00:01:30.979 CXX test/cpp_headers/scheduler.o 00:01:30.980 CXX test/cpp_headers/scsi.o 00:01:30.980 CXX test/cpp_headers/scsi_spec.o 00:01:30.980 CXX test/cpp_headers/sock.o 00:01:30.980 CXX test/cpp_headers/stdinc.o 00:01:30.980 CXX test/cpp_headers/string.o 00:01:30.980 CXX test/cpp_headers/thread.o 00:01:30.980 CXX test/cpp_headers/trace.o 00:01:30.980 CXX test/cpp_headers/trace_parser.o 00:01:30.980 CXX test/cpp_headers/tree.o 00:01:30.980 CXX test/cpp_headers/ublk.o 00:01:30.980 CXX test/cpp_headers/util.o 00:01:30.980 CXX test/cpp_headers/uuid.o 00:01:30.980 CXX test/cpp_headers/version.o 00:01:30.980 CXX test/cpp_headers/vfio_user_pci.o 00:01:30.980 CXX test/cpp_headers/vfio_user_spec.o 00:01:30.980 CXX test/cpp_headers/vhost.o 00:01:30.980 CXX test/cpp_headers/vmd.o 00:01:31.237 CXX test/cpp_headers/xor.o 00:01:31.237 CXX test/cpp_headers/zipf.o 00:01:31.495 LINK memory_ut 00:01:31.753 LINK cuse 00:01:32.319 LINK iscsi_fuzz 00:01:34.855 LINK esnap 00:01:34.855 00:01:34.855 real 0m47.602s 00:01:34.855 user 10m1.668s 00:01:34.855 sys 2m26.465s 00:01:34.855 10:41:51 make -- common/autotest_common.sh@1122 -- $ xtrace_disable 00:01:34.855 10:41:51 make -- common/autotest_common.sh@10 -- $ set +x 00:01:34.855 ************************************ 00:01:34.855 END TEST make 00:01:34.855 ************************************ 00:01:34.855 10:41:51 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:34.855 10:41:51 -- pm/common@29 -- $ signal_monitor_resources TERM 00:01:34.855 10:41:51 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:01:34.855 10:41:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:34.855 10:41:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:34.855 10:41:51 -- pm/common@44 -- $ pid=2579687 00:01:34.855 
10:41:51 -- pm/common@50 -- $ kill -TERM 2579687 00:01:34.855 10:41:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:34.855 10:41:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:34.855 10:41:51 -- pm/common@44 -- $ pid=2579689 00:01:34.855 10:41:51 -- pm/common@50 -- $ kill -TERM 2579689 00:01:34.856 10:41:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:34.856 10:41:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:34.856 10:41:51 -- pm/common@44 -- $ pid=2579691 00:01:34.856 10:41:51 -- pm/common@50 -- $ kill -TERM 2579691 00:01:34.856 10:41:51 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:34.856 10:41:51 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:34.856 10:41:51 -- pm/common@44 -- $ pid=2579727 00:01:34.856 10:41:51 -- pm/common@50 -- $ sudo -E kill -TERM 2579727 00:01:35.114 10:41:51 -- spdk/autotest.sh@34 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:01:35.114 10:41:51 -- nvmf/common.sh@7 -- # uname -s 00:01:35.114 10:41:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:35.114 10:41:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:35.114 10:41:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:35.114 10:41:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:35.114 10:41:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:35.114 10:41:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:35.114 10:41:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:35.114 10:41:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:35.114 10:41:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:35.114 10:41:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:35.114 10:41:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:01:35.114 10:41:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:01:35.114 10:41:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:35.114 10:41:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:35.114 10:41:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:01:35.114 10:41:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:35.114 10:41:51 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:35.114 10:41:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:35.114 10:41:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:35.114 10:41:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:35.114 10:41:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:35.114 10:41:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:35.114 10:41:51 -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:35.114 10:41:51 -- paths/export.sh@5 -- # export PATH 00:01:35.114 10:41:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:35.114 10:41:51 -- nvmf/common.sh@47 -- # : 0 00:01:35.114 10:41:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:35.114 10:41:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:35.114 10:41:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:35.114 10:41:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:35.114 10:41:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:35.114 10:41:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:35.114 10:41:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:35.114 10:41:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:35.114 10:41:51 -- spdk/autotest.sh@36 -- # '[' 0 -ne 0 ']' 00:01:35.114 10:41:51 -- spdk/autotest.sh@41 -- # uname -s 00:01:35.114 10:41:51 -- spdk/autotest.sh@41 -- # '[' Linux = Linux ']' 00:01:35.114 10:41:51 -- spdk/autotest.sh@42 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:35.114 10:41:51 -- spdk/autotest.sh@43 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:35.114 10:41:51 -- spdk/autotest.sh@48 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:35.114 10:41:51 -- spdk/autotest.sh@49 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:01:35.114 10:41:51 -- spdk/autotest.sh@53 -- # modprobe nbd 00:01:35.114 10:41:51 -- spdk/autotest.sh@55 -- # type -P udevadm 00:01:35.114 10:41:51 -- spdk/autotest.sh@55 -- # udevadm=/usr/sbin/udevadm 00:01:35.114 10:41:51 -- spdk/autotest.sh@57 -- # udevadm_pid=2634399 00:01:35.114 10:41:51 -- spdk/autotest.sh@56 -- # /usr/sbin/udevadm monitor --property 00:01:35.114 10:41:51 -- spdk/autotest.sh@62 -- # start_monitor_resources 00:01:35.114 10:41:51 -- pm/common@17 -- # local monitor 00:01:35.114 10:41:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:35.114 10:41:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:35.114 10:41:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:35.114 10:41:51 -- pm/common@21 -- # date +%s 00:01:35.114 10:41:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:35.114 10:41:51 -- pm/common@21 -- # date +%s 00:01:35.114 10:41:51 -- pm/common@25 -- # sleep 1 00:01:35.114 10:41:51 -- pm/common@21 -- # date +%s 00:01:35.114 10:41:51 -- pm/common@21 -- # date +%s 00:01:35.114 10:41:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715762511 00:01:35.114 10:41:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p 
monitor.autotest.sh.1715762511
00:01:35.114 10:41:51 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715762511
00:01:35.114 10:41:51 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715762511
00:01:35.114 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715762511_collect-vmstat.pm.log
00:01:35.114 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715762511_collect-cpu-load.pm.log
00:01:35.114 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715762511_collect-cpu-temp.pm.log
00:01:35.114 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715762511_collect-bmc-pm.bmc.pm.log
00:01:36.048 10:41:52 -- spdk/autotest.sh@64 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:01:36.048 10:41:52 -- spdk/autotest.sh@66 -- # timing_enter autotest
00:01:36.048 10:41:52 -- common/autotest_common.sh@720 -- # xtrace_disable
00:01:36.048 10:41:52 -- common/autotest_common.sh@10 -- # set +x
00:01:36.048 10:41:52 -- spdk/autotest.sh@68 -- # create_test_list
00:01:36.048 10:41:52 -- common/autotest_common.sh@744 -- # xtrace_disable
00:01:36.048 10:41:52 -- common/autotest_common.sh@10 -- # set +x
00:01:36.048 10:41:52 -- spdk/autotest.sh@70 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:01:36.048 10:41:52 -- spdk/autotest.sh@70 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:36.048 10:41:52 -- spdk/autotest.sh@70 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:36.048 10:41:52 -- spdk/autotest.sh@71 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:36.048 10:41:52 -- spdk/autotest.sh@72 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:36.048 10:41:52 -- spdk/autotest.sh@74 -- # freebsd_update_contigmem_mod
00:01:36.048 10:41:52 -- common/autotest_common.sh@1451 -- # uname
00:01:36.048 10:41:52 -- common/autotest_common.sh@1451 -- # '[' Linux = FreeBSD ']'
00:01:36.048 10:41:52 -- spdk/autotest.sh@75 -- # freebsd_set_maxsock_buf
00:01:36.048 10:41:52 -- common/autotest_common.sh@1471 -- # uname
00:01:36.048 10:41:52 -- common/autotest_common.sh@1471 -- # [[ Linux = FreeBSD ]]
00:01:36.048 10:41:52 -- spdk/autotest.sh@80 -- # grep CC_TYPE mk/cc.mk
00:01:36.048 10:41:52 -- spdk/autotest.sh@80 -- # CC_TYPE=CC_TYPE=gcc
00:01:36.048 10:41:52 -- spdk/autotest.sh@81 -- # hash lcov
00:01:36.048 10:41:52 -- spdk/autotest.sh@81 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
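The four collect-* monitors started above follow a simple PID-file protocol: each backgrounds itself, logs under ../output/power (the "Redirecting to" lines), and records its PID in a *.pid file so the kill -TERM loop at the top of this section can stop it during cleanup. A minimal, hypothetical sketch of that pattern in bash (the real helpers live in spdk/scripts/perf/pm and pm/common; start_monitor/stop_monitors are illustrative names):

    # Hedged sketch of the PID-file start/stop protocol, not the real pm/common.
    power_dir=$out/power
    start_monitor() {
      local mon=$1
      "$src/scripts/perf/pm/$mon" -d "$power_dir" -l -p "monitor.autotest.sh.$(date +%s)" &
      echo $! > "$power_dir/$mon.pid"   # the PID the cleanup loop will kill -TERM
    }
    stop_monitors() {
      local pidfile
      for pidfile in "$power_dir"/collect-*.pid; do
        [[ -e $pidfile ]] && kill -TERM "$(cat "$pidfile")"
      done
    }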
00:01:36.048 10:41:52 -- spdk/autotest.sh@89 -- # export 'LCOV_OPTS=
00:01:36.048 --rc lcov_branch_coverage=1
00:01:36.048 --rc lcov_function_coverage=1
00:01:36.048 --rc genhtml_branch_coverage=1
00:01:36.048 --rc genhtml_function_coverage=1
00:01:36.048 --rc genhtml_legend=1
00:01:36.048 --rc geninfo_all_blocks=1
00:01:36.048 '
00:01:36.048 10:41:52 -- spdk/autotest.sh@89 -- # LCOV_OPTS='
00:01:36.048 --rc lcov_branch_coverage=1
00:01:36.048 --rc lcov_function_coverage=1
00:01:36.048 --rc genhtml_branch_coverage=1
00:01:36.048 --rc genhtml_function_coverage=1
00:01:36.048 --rc genhtml_legend=1
00:01:36.048 --rc geninfo_all_blocks=1
00:01:36.048 '
00:01:36.048 10:41:52 -- spdk/autotest.sh@90 -- # export 'LCOV=lcov
00:01:36.048 --rc lcov_branch_coverage=1
00:01:36.048 --rc lcov_function_coverage=1
00:01:36.048 --rc genhtml_branch_coverage=1
00:01:36.048 --rc genhtml_function_coverage=1
00:01:36.048 --rc genhtml_legend=1
00:01:36.048 --rc geninfo_all_blocks=1
00:01:36.048 --no-external'
00:01:36.048 10:41:52 -- spdk/autotest.sh@90 -- # LCOV='lcov
00:01:36.048 --rc lcov_branch_coverage=1
00:01:36.048 --rc lcov_function_coverage=1
00:01:36.048 --rc genhtml_branch_coverage=1
00:01:36.048 --rc genhtml_function_coverage=1
00:01:36.048 --rc genhtml_legend=1
00:01:36.048 --rc geninfo_all_blocks=1
00:01:36.048 --no-external'
00:01:36.048 10:41:52 -- spdk/autotest.sh@92 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v
00:01:36.307 lcov: LCOV version 1.14
00:01:36.307 10:41:52 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
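The capture above uses lcov's initial mode (-c -i -t Baseline) to record an all-zero baseline before any test executes; the geninfo warnings that follow are expected for stub and header-only objects that contain no functions. Merging the baseline with a post-test capture is what lets never-exercised files still appear in the final report. A hedged sketch of the full flow with the same LCOV_OPTS (the cov_test/cov_total file names are illustrative):

    # Baseline: -i records zero counts for every instrumented file.
    lcov $LCOV_OPTS --no-external -q -c -i -t Baseline -d "$src" -o "$out/cov_base.info"
    # ... run the test suites ...
    # Capture the real counters, then merge; -a adds tracefiles together,
    # so files present only in the baseline keep their zero counts.
    lcov $LCOV_OPTS --no-external -q -c -t Autotest -d "$src" -o "$out/cov_test.info"
    lcov -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"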
00:01:51.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:01:51.173 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
[... the same "no functions found" warning pair repeats for lib/ftl/upgrade/ftl_p2l_upgrade.gcno, ftl_band_upgrade.gcno and ftl_chunk_upgrade.gcno, and then for every header stub under test/cpp_headers, accel.gcno through zipf.gcno ...]
00:02:10.246 10:42:26 -- spdk/autotest.sh@98 -- # timing_enter pre_cleanup
00:02:10.246 10:42:26 -- common/autotest_common.sh@720 -- # xtrace_disable
00:02:10.246 10:42:26 -- common/autotest_common.sh@10 -- # set +x
00:02:10.246 10:42:26 -- spdk/autotest.sh@100 -- # rm -f
00:02:10.246 10:42:26 -- spdk/autotest.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:11.621 0000:88:00.0 (8086 0a54): Already using the nvme driver
00:02:11.621 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:02:11.621 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:02:11.621 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:02:11.621 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:02:11.621 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:02:11.621 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:02:11.621 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:02:11.879 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:02:11.879 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:02:11.879 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:02:11.879 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:02:11.879 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:02:11.879 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:02:11.879 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:02:11.879 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:02:11.879 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:02:11.879 10:42:28 -- spdk/autotest.sh@105 -- # get_zoned_devs
00:02:11.879 10:42:28 -- common/autotest_common.sh@1665 -- # zoned_devs=()
00:02:11.879 10:42:28 -- common/autotest_common.sh@1665 -- # local -gA zoned_devs
00:02:11.879 10:42:28 -- common/autotest_common.sh@1666 -- # local nvme bdf
00:02:11.879 10:42:28 -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:02:11.879 10:42:28 -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1
00:02:11.879 10:42:28 -- common/autotest_common.sh@1658 -- # local device=nvme0n1
00:02:11.879 10:42:28 -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:11.879 10:42:28 -- common/autotest_common.sh@1661 -- # [[ none != none ]]
00:02:11.879 10:42:28 -- spdk/autotest.sh@107 -- # (( 0 > 0 ))
00:02:11.879 10:42:28 -- spdk/autotest.sh@119 -- # for dev in /dev/nvme*n!(*p*)
00:02:11.879 10:42:28 -- spdk/autotest.sh@121 -- # [[ -z '' ]]
00:02:11.879 10:42:28 -- spdk/autotest.sh@122 -- # block_in_use /dev/nvme0n1
00:02:11.879 10:42:28 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
00:02:11.879 10:42:28 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:02:11.879 No valid GPT data, bailing
00:02:11.879 10:42:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:02:11.879 10:42:28 -- scripts/common.sh@391 -- # pt=
00:02:11.879 10:42:28 -- scripts/common.sh@392 -- # return 1
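block_in_use first probed the disk with the SPDK-specific spdk-gpt.py and then fell back to blkid; both came back empty ("No valid GPT data, bailing", pt=), so the namespace is treated as free and the dd that follows scrubs its first MiB. A condensed, hedged sketch of the same guard, with plain blkid standing in for spdk-gpt.py:

    # Skip zoned namespaces and disks that still carry a partition table,
    # then scrub the first MiB of anything that is safe to reuse.
    for dev in /dev/nvme*n1; do
      zoned=$(cat "/sys/block/${dev#/dev/}/queue/zoned" 2>/dev/null || echo none)
      [[ $zoned != none ]] && continue           # zoned: conventional writes would fail
      pt=$(blkid -s PTTYPE -o value "$dev")
      [[ -n $pt ]] && continue                   # partition table present: leave it alone
      dd if=/dev/zero of="$dev" bs=1M count=1    # wipe stale metadata
    done

The /dev/nvme*n1 glob is a simplification of the extglob pattern nvme*n!(*p*) traced above, which additionally excludes partition nodes.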
00:02:11.879 10:42:28 -- spdk/autotest.sh@123 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:02:11.879 1+0 records in
00:02:11.879 1+0 records out
00:02:11.879 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00234052 s, 448 MB/s
00:02:11.879 10:42:28 -- spdk/autotest.sh@127 -- # sync
00:02:11.879 10:42:28 -- spdk/autotest.sh@129 -- # xtrace_disable_per_cmd reap_spdk_processes
00:02:11.879 10:42:28 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:02:11.879 10:42:28 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:02:13.781 10:42:29 -- spdk/autotest.sh@133 -- # uname -s
00:02:13.781 10:42:29 -- spdk/autotest.sh@133 -- # '[' Linux = Linux ']'
00:02:13.781 10:42:29 -- spdk/autotest.sh@134 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:02:13.781 10:42:29 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:02:13.781 10:42:29 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:02:13.781 10:42:29 -- common/autotest_common.sh@10 -- # set +x
00:02:13.781 ************************************
00:02:13.781 START TEST setup.sh
00:02:13.781 ************************************
00:02:13.781 10:42:29 setup.sh -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:02:13.781 * Looking for test storage...
00:02:13.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:13.781 10:42:29 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:02:13.781 10:42:29 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:02:13.781 10:42:29 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:02:13.781 10:42:29 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:02:13.781 10:42:29 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable
00:02:13.781 10:42:29 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:13.781 ************************************
00:02:13.781 START TEST acl
00:02:13.781 ************************************
00:02:13.781 10:42:29 setup.sh.acl -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:02:13.781 * Looking for test storage...
00:02:13.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:13.781 10:42:29 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:02:13.781 10:42:29 setup.sh.acl -- common/autotest_common.sh@1665 -- # zoned_devs=()
00:02:13.781 10:42:29 setup.sh.acl -- common/autotest_common.sh@1665 -- # local -gA zoned_devs
00:02:13.781 10:42:29 setup.sh.acl -- common/autotest_common.sh@1666 -- # local nvme bdf
00:02:13.781 10:42:29 setup.sh.acl -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme*
00:02:13.781 10:42:29 setup.sh.acl -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1
00:02:13.781 10:42:29 setup.sh.acl -- common/autotest_common.sh@1658 -- # local device=nvme0n1
00:02:13.781 10:42:29 setup.sh.acl -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:02:13.781 10:42:29 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ none != none ]]
00:02:13.781 10:42:29 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:02:13.781 10:42:29 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:02:13.781 10:42:29 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:02:13.781 10:42:29 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:02:13.781 10:42:29 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:02:13.781 10:42:29 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:13.781 10:42:29 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:15.683 10:42:31 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:02:15.683 10:42:31 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:02:15.683 10:42:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:15.683 10:42:31 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:02:15.683 10:42:31 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:02:15.683 10:42:31 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:17.061 Hugepages
00:02:17.061 node hugesize free / total
00:02:17.061 10:42:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:02:17.061 10:42:32 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:02:17.061 10:42:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:17.061 10:42:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:02:17.061 10:42:32 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:02:17.061 10:42:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:17.061 10:42:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]]
00:02:17.061 10:42:32 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:02:17.061 10:42:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:17.061
00:02:17.061 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:17.061 10:42:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]]
00:02:17.061 10:42:32 setup.sh.acl -- setup/acl.sh@19 -- # continue
00:02:17.061 10:42:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:17.061 10:42:32 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]]
00:02:17.061 10:42:32 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]]
00:02:17.061 10:42:32 setup.sh.acl -- setup/acl.sh@20 -- # continue
00:02:17.061 10:42:32 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
[... the same @19/@20 check and continue repeat for 0000:00:04.1 through 0000:00:04.7 and 0000:80:04.0 through 0000:80:04.7, each bound to ioatdma ...]
00:02:17.062 10:42:33 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]]
00:02:17.062 10:42:33 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:02:17.062 10:42:33 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]]
00:02:17.062 10:42:33 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:02:17.062 10:42:33 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:02:17.062 10:42:33 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:02:17.062 10:42:33 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:02:17.062 10:42:33 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:02:17.062 10:42:33 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:02:17.062 10:42:33 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable
00:02:17.062 10:42:33 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:02:17.062 ************************************
00:02:17.062 START TEST denied
00:02:17.062 ************************************
00:02:17.062 10:42:33 setup.sh.acl.denied -- common/autotest_common.sh@1121 -- # denied
00:02:17.062 10:42:33 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0'
00:02:17.062 10:42:33 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:02:17.062 10:42:33 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0'
00:02:17.062 10:42:33 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:02:17.062 10:42:33 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:02:18.436 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0
00:02:18.436 10:42:34 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0
00:02:18.436 10:42:34 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:02:18.436 10:42:34 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:02:18.436 10:42:34 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]]
00:02:18.436 10:42:34 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver
00:02:18.436 10:42:34 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:02:18.436 10:42:34 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
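verify() leans on the sysfs layout: /sys/bus/pci/devices/<bdf>/driver is a symlink into /sys/bus/pci/drivers/<name>, so resolving it with readlink -f and comparing the basename tells the test which driver holds the device. Here the denied test expects the controller to still be on nvme, because PCI_BLOCKED kept setup.sh away from it. A standalone rendering of that check (the function name is illustrative):

    # Print the driver currently bound to a PCI function, or "none" if unbound.
    pci_driver_of() {
      local bdf=$1
      [[ -e /sys/bus/pci/devices/$bdf/driver ]] || { echo none; return; }
      basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")"
    }
    [[ $(pci_driver_of 0000:88:00.0) == nvme ]]   # blocked controller left untouched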
00:02:18.436 10:42:34 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:02:18.436 10:42:34 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:18.436 10:42:34 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:20.966
00:02:20.966 real 0m4.120s
00:02:20.966 user 0m1.227s
00:02:20.966 sys 0m2.074s
00:02:20.966 10:42:37 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # xtrace_disable
00:02:20.966 10:42:37 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:02:20.966 ************************************
00:02:20.966 END TEST denied
00:02:20.966 ************************************
00:02:20.966 10:42:37 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:02:20.966 10:42:37 setup.sh.acl -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:02:20.966 10:42:37 setup.sh.acl -- common/autotest_common.sh@1103 -- # xtrace_disable
00:02:20.966 10:42:37 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:02:21.278 ************************************
00:02:21.278 START TEST allowed
00:02:21.278 ************************************
00:02:21.278 10:42:37 setup.sh.acl.allowed -- common/autotest_common.sh@1121 -- # allowed
00:02:21.278 10:42:37 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0
00:02:21.278 10:42:37 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:02:21.278 10:42:37 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*'
00:02:21.278 10:42:37 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:02:21.278 10:42:37 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:02:23.808 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
00:02:23.808 10:42:39 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:02:23.808 10:42:39 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:02:23.808 10:42:39 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:02:23.808 10:42:39 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:23.808 10:42:39 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:02:25.181
00:02:25.181 real 0m4.174s
00:02:25.181 user 0m1.153s
00:02:25.181 sys 0m1.947s
00:02:25.181 10:42:41 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # xtrace_disable
00:02:25.181 10:42:41 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:02:25.181 ************************************
00:02:25.181 END TEST allowed
00:02:25.181 ************************************
00:02:25.441
00:02:25.441 real 0m11.497s
00:02:25.441 user 0m3.652s
00:02:25.441 sys 0m6.043s
00:02:25.441 10:42:41 setup.sh.acl -- common/autotest_common.sh@1122 -- # xtrace_disable
00:02:25.441 10:42:41 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:02:25.441 ************************************
00:02:25.441 END TEST acl
00:02:25.441 ************************************
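Every suite in this log is wrapped by run_test, which produces the START/END banners and, via time, the real/user/sys triples above. A stripped-down sketch of the idea (the real helper in autotest_common.sh also validates its arguments and toggles xtrace, as the '[' 2 -le 1 ']' and xtrace_disable lines show):

    # Hedged, simplified rendering of the run_test envelope.
    run_test() {
      local name=$1; shift
      printf '%s\n' '************************************' \
                    "START TEST $name" \
                    '************************************'
      time "$@"    # emits the real/user/sys triple seen after each suite
      printf '%s\n' '************************************' \
                    "END TEST $name" \
                    '************************************'
    }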
00:02:25.441 10:42:41 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:02:25.441 10:42:41 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:02:25.441 10:42:41 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable
00:02:25.441 10:42:41 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:02:25.441 ************************************
00:02:25.441 START TEST hugepages
00:02:25.441 ************************************
00:02:25.441 10:42:41 setup.sh.hugepages -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:02:25.442 * Looking for test storage...
00:02:25.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:02:25.442 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:02:25.442 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:02:25.442 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:02:25.442 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:02:25.442 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:02:25.442 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:02:25.442 10:42:41 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:02:25.442 10:42:41 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:02:25.442 10:42:41 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:02:25.442 10:42:41 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:02:25.442 10:42:41 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:25.442 10:42:41 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:25.442 10:42:41 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:25.442 10:42:41 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:02:25.442 10:42:41 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:25.442 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:02:25.442 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:02:25.442 10:42:41 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 35499036 kB' 'MemAvailable: 40238924 kB' 'Buffers: 2696 kB' 'Cached: 18386948 kB' 'SwapCached: 0 kB' 'Active: 14381292 kB' 'Inactive: 4476780 kB' 'Active(anon): 13746160 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4476780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 471824 kB' 'Mapped: 225612 kB' 'Shmem: 13277732 kB' 'KReclaimable: 243200 kB' 'Slab: 637824 kB' 'SReclaimable: 243200 kB' 'SUnreclaim: 394624 kB' 'KernelStack: 13184 kB' 'PageTables: 9284 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562316 kB' 'Committed_AS: 14872280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198940 kB' 'VmallocChunk: 0 kB' 'Percpu: 40512 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2807388 kB' 'DirectMap2M: 19132416 kB' 'DirectMap1G: 47185920 kB'
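get_meminfo walks /proc/meminfo with IFS=': ' and read -r var val _, comparing each key against the one requested; that is what generates the long compare-and-continue trace that follows, one iteration per field until Hugepagesize matches. The same logic without the xtrace noise (simplified: the real helper in setup/common.sh can also read a per-node meminfo under /sys/devices/system/node):

    # Return the value column of one /proc/meminfo key,
    # e.g. 2048 for Hugepagesize (units, if any, land in "_").
    get_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
    }
    hugepagesize_kb=$(get_meminfo Hugepagesize)   # 2048 on this node

With HugePages_Total 2048 and Hugepagesize 2048 kB, the Hugetlb figure above checks out: 2048 pages x 2048 kB = 4194304 kB reserved.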
00:02:25.442 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:02:25.442 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:02:25.442 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:02:25.442 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
[... the same @32 comparison and @32 continue repeat for each intervening /proc/meminfo field, MemFree through HardwareCorrupted ...]
00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:25.443 10:42:41 
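The condensed xtrace above is setup/common.sh scanning /proc/meminfo field by field until the requested key (Hugepagesize) matches, echoing its value (2048) and returning; setup/hugepages.sh then records the 2048 kB default, discovers the two NUMA nodes, and zeroes every per-node hugepage pool before the test starts. A minimal bash sketch of that pattern, assuming nothing beyond what the trace shows -- get_meminfo_value and clear_hugepages are illustrative names, not the real SPDK helpers (the real get_meminfo snapshots meminfo into an array with mapfile first):

#!/usr/bin/env bash
# Look up one field of /proc/meminfo, e.g. get_meminfo_value Hugepagesize -> 2048.
# Mirrors the IFS=': ' / read -r var val _ / continue pattern in the trace above.
get_meminfo_value() {
    local key=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$key" ]] || continue   # every non-matching field hits continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

# Zero every per-node hugepage pool, like the clear_hp loop at hugepages.sh@39-41
# (echo 0 into each node's nr_hugepages file; requires root).
clear_hugepages() {
    local hp
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"
    done
}

default_hugepages=$(get_meminfo_value Hugepagesize)   # 2048 on this runner

(trace continues)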
setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:25.443 10:42:41 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:25.443 10:42:41 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:25.443 10:42:41 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:25.443 10:42:41 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:25.443 ************************************ 00:02:25.443 START TEST default_setup 00:02:25.443 ************************************ 00:02:25.443 10:42:41 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1121 -- # default_setup 00:02:25.443 10:42:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:25.443 10:42:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:02:25.443 10:42:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:25.443 10:42:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:02:25.443 10:42:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:25.443 10:42:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:02:25.443 10:42:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:25.443 10:42:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:25.443 10:42:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:25.443 10:42:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:25.443 10:42:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:02:25.443 10:42:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:25.443 10:42:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:25.443 10:42:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:25.443 10:42:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:25.443 10:42:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:25.443 10:42:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:25.443 10:42:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:25.443 10:42:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:02:25.443 10:42:41 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:02:25.443 10:42:41 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:02:25.443 10:42:41 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:26.853 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:26.853 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:26.853 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:26.853 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:26.853 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:26.853 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:26.853 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 
00:02:26.853 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:26.853 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:26.853 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:26.853 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:26.853 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:26.853 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:26.853 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:26.853 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:26.853 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:27.792 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:27.792 10:42:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:27.792 10:42:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:02:27.792 10:42:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:02:27.792 10:42:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:02:27.792 10:42:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:02:27.792 10:42:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:02:27.792 10:42:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:02:27.792 10:42:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:27.792 10:42:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:27.792 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:27.792 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:27.792 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:27.792 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:27.792 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:27.792 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:27.792 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:27.792 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:27.792 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:27.792 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:27.792 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:27.792 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37511324 kB' 'MemAvailable: 42251212 kB' 'Buffers: 2696 kB' 'Cached: 18387048 kB' 'SwapCached: 0 kB' 'Active: 14406632 kB' 'Inactive: 4476780 kB' 'Active(anon): 13771500 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4476780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496912 kB' 'Mapped: 226364 kB' 'Shmem: 13277832 kB' 'KReclaimable: 243200 kB' 'Slab: 637332 kB' 'SReclaimable: 243200 kB' 'SUnreclaim: 394132 kB' 'KernelStack: 13264 kB' 'PageTables: 9584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14901332 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 199024 kB' 'VmallocChunk: 0 kB' 'Percpu: 40512 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2807388 kB' 'DirectMap2M: 19132416 kB' 'DirectMap1G: 47185920 kB' 00:02:27.792 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:27.792 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... identical xtrace block repeats for every /proc/meminfo field from MemFree through WritebackTmp, each failing [[ $var == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] and hitting continue ...]
00:02:27.793 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:27.793 10:42:43
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:27.793 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:27.793 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:27.793 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:27.793 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:27.793 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:27.793 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:27.793 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:27.793 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:27.793 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:27.793 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:27.793 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:27.793 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:27.793 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:27.793 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:27.793 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:27.793 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 
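At this point verify_nr_hugepages has read AnonHugePages back as 0 (anon=0 above) and is re-scanning meminfo for HugePages_Surp and then HugePages_Rsvd. The target it verifies against was fixed earlier in the default_setup trace (hugepages.sh@136): a 2097152 kB request divided by the 2048 kB page size gives nr_hugepages=1024, all placed on node 0. A sketch of that arithmetic and the resulting check, reusing the illustrative helper above -- this captures the spirit of the verification, not the script's exact logic:

# Target: requested size / page size, per hugepages.sh@136 (2097152 / 2048 = 1024).
size_kb=2097152
hugepagesize_kb=$(get_meminfo_value Hugepagesize)   # 2048 here
nr_hugepages=$(( size_kb / hugepagesize_kb ))       # = 1024

# Check: no surplus or reserved pages, and the pool matches the target
# (the snapshots above show HugePages_Total: 1024, HugePages_Rsvd/Surp: 0).
surp=$(get_meminfo_value HugePages_Surp)
rsvd=$(get_meminfo_value HugePages_Rsvd)
total=$(get_meminfo_value HugePages_Total)
if (( surp == 0 && rsvd == 0 && total == nr_hugepages )); then
    echo "hugepages OK: $total x ${hugepagesize_kb} kB"
else
    echo "hugepages mismatch: total=$total expected=$nr_hugepages" >&2
fi

(trace continues)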
00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37517072 kB' 'MemAvailable: 42256960 kB' 'Buffers: 2696 kB' 'Cached: 18387048 kB' 'SwapCached: 0 kB' 'Active: 14406716 kB' 'Inactive: 4476780 kB' 'Active(anon): 13771584 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4476780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497104 kB' 'Mapped: 226440 kB' 'Shmem: 13277832 kB' 'KReclaimable: 243200 kB' 'Slab: 637396 kB' 'SReclaimable: 243200 kB' 'SUnreclaim: 394196 kB' 'KernelStack: 13040 kB' 'PageTables: 9124 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14901352 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198976 kB' 'VmallocChunk: 0 kB' 'Percpu: 40512 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2807388 kB' 'DirectMap2M: 19132416 kB' 'DirectMap1G: 47185920 kB' 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:02:27.794 10:42:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... identical xtrace block repeats for every /proc/meminfo field from Cached through CmaTotal, each failing [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and hitting continue ...]
00:02:27.795 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.795 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:27.795 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:27.795 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:27.795 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.795 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:27.795 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:27.795 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:27.795 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.795 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:27.795 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:27.795 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:27.795 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.795 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:27.795 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:27.795 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:27.795 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.795 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:27.795 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:27.795 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:27.795 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:27.796 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:27.796 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:27.796 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:02:28.058 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:28.058 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:28.058 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:02:28.058 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:02:28.058 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:02:28.058 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:28.058 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:28.058 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:28.058 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:02:28.058 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:28.058 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:28.058 10:42:44 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:28.058 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37520232 kB' 'MemAvailable: 42260120 kB' 'Buffers: 2696 kB' 'Cached: 18387068 kB' 'SwapCached: 0 kB' 'Active: 14402724 kB' 'Inactive: 4476780 kB' 'Active(anon): 13767592 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4476780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493008 kB' 'Mapped: 226416 kB' 'Shmem: 13277852 kB' 'KReclaimable: 243200 kB' 'Slab: 637516 kB' 'SReclaimable: 243200 kB' 'SUnreclaim: 394316 kB' 'KernelStack: 12960 kB' 'PageTables: 8952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14898456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198972 kB' 'VmallocChunk: 0 kB' 'Percpu: 40512 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2807388 kB' 'DirectMap2M: 19132416 kB' 'DirectMap1G: 47185920 kB' 00:02:28.058 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.058 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:28.058 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:28.058 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:28.058 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.058 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:28.058 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:28.059 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:28.059 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.059 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:28.059 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:28.059 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:28.059 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.059 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:28.059 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:28.059 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:28.059 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.059 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:28.059 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:28.059 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:28.059 10:42:44 
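For reference, the loop being traced here boils down to the following sketch (reconstructed from the trace, not copied from SPDK's setup/common.sh; the function name and variable names match the trace, the exact body is an assumption):

  #!/usr/bin/env bash
  shopt -s extglob                        # for the +([0-9]) pattern below

  # get_meminfo KEY [NODE]: print KEY's value from /proc/meminfo, or from
  # the per-node sysfs meminfo when NODE is given.
  get_meminfo() {
      local get=$1 node=$2 var val _ line
      local mem_f=/proc/meminfo mem
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")    # sysfs lines carry a "Node N " prefix
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue    # the long runs of "continue" above
          echo "$val"
          return 0
      done
      return 1
  }

  get_meminfo HugePages_Surp        # system-wide; prints 0 on this box
  get_meminfo HugePages_Surp 0      # same key, restricted to NUMA node 0

Each "IFS=': ' / read -r var val _ / [[ ... ]] / continue" quadruple in the trace is one iteration of that loop.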
[trace condensed: the "[[ <field> == HugePages_Rsvd ]] / continue" run repeats at 00:02:28.058-060 for every snapshot field from MemTotal through HugePages_Free, until the requested field matches]
00:02:28.060 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:28.060 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:02:28.060 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:28.060 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:02:28.060 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:28.060 nr_hugepages=1024
00:02:28.060 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:28.060 resv_hugepages=0
00:02:28.060 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:28.060 surplus_hugepages=0
00:02:28.060 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:28.060 anon_hugepages=0
00:02:28.060 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:28.060 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:28.060 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:28.060 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:28.060 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:02:28.060 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:28.060 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:28.060 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:28.060 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:28.060 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:28.060 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:28.060 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:28.060 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:28.060 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:28.060 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37520228 kB' 'MemAvailable: 42260116 kB' 'Buffers: 2696 kB' 'Cached: 18387088 kB' 'SwapCached: 0 kB' 'Active: 14405876 kB' 'Inactive: 4476780 kB' 'Active(anon): 13770744 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4476780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496096 kB' 'Mapped: 226336 kB' 'Shmem: 13277872 kB' 'KReclaimable: 243200 kB' 'Slab: 637452 kB' 'SReclaimable: 243200 kB' 'SUnreclaim: 394252 kB' 'KernelStack: 13040 kB' 'PageTables: 9020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14901392 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198992 kB' 'VmallocChunk: 0 kB' 'Percpu: 40512 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2807388 kB' 'DirectMap2M: 19132416 kB' 'DirectMap1G: 47185920 kB'
[trace condensed: the "[[ <field> == HugePages_Total ]] / continue" run repeats at 00:02:28.060-062 for every snapshot field from MemTotal through Unaccepted, until the requested field matches]
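The hugepage numbers in the snapshot above are internally consistent, which is what the checks that follow rely on: 1024 pages of 2048 kB each account exactly for the reported hugetlb pool. A one-line cross-check (plain arithmetic, not part of the original script):

  echo "$((1024 * 2048)) kB"    # 1024 pages x 2048 kB/page = 2097152 kB, matching 'Hugetlb: 2097152 kB'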
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:02:28.062 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21515812 kB' 'MemUsed: 11314072 kB' 'SwapCached: 0 kB' 'Active: 7748480 kB' 'Inactive: 342492 kB' 'Active(anon): 7318088 kB' 'Inactive(anon): 0 kB' 'Active(file): 430392 kB' 'Inactive(file): 342492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7855144 kB' 'Mapped: 81436 kB' 'AnonPages: 238968 kB' 'Shmem: 7082260 kB' 'KernelStack: 8184 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120780 kB' 'Slab: 334484 kB' 'SReclaimable: 120780 kB' 'SUnreclaim: 213704 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
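With all three counters read back (total=1024, surp=0, resv=0), hugepages.sh verifies the pool and then walks the NUMA nodes. A sketch of that bookkeeping using the trace's own names (the right-hand side of the nodes_sys assignment is an assumption; the trace only shows the resulting values 1024 and 0):

  shopt -s extglob

  nr_hugepages=1024 surp=0 resv=0
  total=$(get_meminfo HugePages_Total)           # get_meminfo as sketched earlier

  if (( total == nr_hugepages + surp + resv )); then
      declare -a nodes_sys
      for node in /sys/devices/system/node/node+([0-9]); do
          # node0 holds all 1024 pages on this rig; node1 holds 0
          nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
      done
      no_nodes=${#nodes_sys[@]}                  # 2 on this test machine
  fi

The per-node read that follows (get_meminfo HugePages_Surp 0) is the same scan as before, only against /sys/devices/system/node/node0/meminfo instead of /proc/meminfo.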
[trace condensed: the "[[ <field> == HugePages_Surp ]] / continue" run repeats at 00:02:28.062-064 for each field of the node0 snapshot above, from MemTotal through Unaccepted; the log breaks off mid-scan at:]
00:02:28.064 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.064 10:42:44
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:28.064 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:28.064 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:28.064 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.064 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:02:28.064 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:02:28.064 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:02:28.064 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.064 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:02:28.064 10:42:44 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:02:28.064 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:28.064 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:28.064 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:28.064 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:28.064 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:28.064 node0=1024 expecting 1024 00:02:28.064 10:42:44 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:28.064 00:02:28.064 real 0m2.499s 00:02:28.064 user 0m0.629s 00:02:28.064 sys 0m0.854s 00:02:28.064 10:42:44 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:28.064 10:42:44 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:02:28.064 ************************************ 00:02:28.064 END TEST default_setup 00:02:28.064 ************************************ 00:02:28.064 10:42:44 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:28.064 10:42:44 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:28.064 10:42:44 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:28.064 10:42:44 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:28.064 ************************************ 00:02:28.064 START TEST per_node_1G_alloc 00:02:28.064 ************************************ 00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc 00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 
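For orientation, a minimal bash sketch of the wrapper pattern behind the START/END banners and the real/user/sys timing above; the structure is assumed from the trace, not copied from common/autotest_common.sh.

  # Hypothetical, simplified run_test-style wrapper: print a START banner,
  # time the test function (which emits the real/user/sys lines), then an
  # END banner. Assumed shape only.
  run_test() {
      local test_name=$1
      shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
  }

  run_test per_node_1G_alloc per_node_1G_alloc   # as invoked at hugepages.sh@211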
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1121 -- # per_node_1G_alloc
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:28.064 10:42:44 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:29.444 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:29.444 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:29.444 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:29.444 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:29.444 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:29.444 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:29.444 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:29.444 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:29.444 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:29.444 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:29.444 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:29.444 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:29.444 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:29.444 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:29.444 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:29.444 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:29.444 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
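The trace above reduces to simple arithmetic: a 1G request (1048576 kB) divided by the 2048 kB default hugepage size gives 512 pages, assigned to every node listed in HUGENODE. A minimal bash sketch of that logic, reconstructed from the traced variable names (the real setup/hugepages.sh may differ in detail):

  default_hugepages=2048                         # kB, matching 'Hugepagesize: 2048 kB'

  get_test_nr_hugepages() {
      local size=$1; shift                       # size in kB, e.g. 1048576
      local node_ids=("$@")                      # e.g. (0 1)
      (( size >= default_hugepages )) || return 1
      nr_hugepages=$(( size / default_hugepages ))   # 1048576 / 2048 = 512
      get_test_nr_hugepages_per_node "${node_ids[@]}"
  }

  get_test_nr_hugepages_per_node() {
      local user_nodes=("$@") _no_nodes
      nodes_test=()
      for _no_nodes in "${user_nodes[@]}"; do
          nodes_test[_no_nodes]=$nr_hugepages    # nodes_test[0]=512, nodes_test[1]=512
      done
  }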
00:02:29.444 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:02:29.444 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:02:29.445 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:02:29.445 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:29.445 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:29.445 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:29.445 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:29.445 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:29.445 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:29.445 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:29.445 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:29.445 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:29.445 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:29.445 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:29.445 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:29.445 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:29.445 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:29.445 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:29.445 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:29.445 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:29.445 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:29.445 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37524844 kB' 'MemAvailable: 42264732 kB' 'Buffers: 2696 kB' 'Cached: 18387160 kB' 'SwapCached: 0 kB' 'Active: 14400408 kB' 'Inactive: 4476780 kB' 'Active(anon): 13765276 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4476780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490484 kB' 'Mapped: 225716 kB' 'Shmem: 13277944 kB' 'KReclaimable: 243200 kB' 'Slab: 637300 kB' 'SReclaimable: 243200 kB' 'SUnreclaim: 394100 kB' 'KernelStack: 12992 kB' 'PageTables: 8872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14895324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199148 kB' 'VmallocChunk: 0 kB' 'Percpu: 40512 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2807388 kB' 'DirectMap2M: 19132416 kB' 'DirectMap1G: 47185920 kB'
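A quick sanity check on the dump above (a worked example, not part of the trace): HugePages_Total times Hugepagesize should equal the Hugetlb line.

  echo $(( 1024 * 2048 ))    # 2097152 kB, matching 'Hugetlb: 2097152 kB'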
[xtrace condensed: setup/common.sh@31-32 loop scans every field of the dump above (MemTotal … HardwareCorrupted), continuing past each line that is not AnonHugePages]
00:02:29.446 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:29.446 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:29.446 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:29.446 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
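Each get_meminfo call in this trace follows the same mechanism: load the meminfo source, then scan it with IFS=': ' until the requested field matches. A minimal bash sketch, simplified from the traced mapfile/read loop (the real setup/common.sh also strips 'Node N' prefixes for per-node queries):

  get_meminfo() {
      local get=$1 node=${2:-}
      local var val _
      local mem_f=/proc/meminfo
      # with a node argument, the per-node meminfo file is used instead
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "${val:-0}"          # e.g. 'echo 0' for HugePages_Surp above
              return 0
          fi
      done <"$mem_f"
      return 1
  }

  surp=$(get_meminfo HugePages_Surp)    # yields 0 in the run above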
00:02:29.446 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[xtrace condensed: same setup/common.sh@17-31 get_meminfo preamble as above (mem_f=/proc/meminfo, mapfile -t mem), then the dump:]
00:02:29.447 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37531636 kB' 'MemAvailable: 42271524 kB' 'Buffers: 2696 kB' 'Cached: 18387160 kB' 'SwapCached: 0 kB' 'Active: 14401020 kB' 'Inactive: 4476780 kB' 'Active(anon): 13765888 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4476780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491116 kB' 'Mapped: 225716 kB' 'Shmem: 13277944 kB' 'KReclaimable: 243200 kB' 'Slab: 637272 kB' 'SReclaimable: 243200 kB' 'SUnreclaim: 394072 kB' 'KernelStack: 13008 kB' 'PageTables: 8820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14895344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199116 kB' 'VmallocChunk: 0 kB' 'Percpu: 40512 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2807388 kB' 'DirectMap2M: 19132416 kB' 'DirectMap1G: 47185920 kB'
[xtrace condensed: setup/common.sh@31-32 field scan, continuing past each line that is not HugePages_Surp]
00:02:29.449 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:29.449 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:29.449 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:29.449 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
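With anon and surp collected, verify_nr_hugepages ultimately compares each node's allocated pages against the expected count, as in the earlier 'node0=1024 expecting 1024' output. A minimal sketch of such a per-node check via sysfs (assumed shape, not the literal hugepages.sh loop):

  verify_node_hugepages() {
      local expected=$1 node got
      for node in /sys/devices/system/node/node[0-9]*; do
          # per-node 2 MiB hugepage counter exposed by the kernel
          got=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
          echo "${node##*/}=$got expecting $expected"
          [[ $got == "$expected" ]] || return 1
      done
  }

  verify_node_hugepages 512    # per-node expectation for NRHUGE=512, HUGENODE=0,1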
00:02:29.449 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:29.449 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:29.449 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:29.449 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:29.449 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:29.449 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:29.449 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:29.449 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:29.449 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:29.449 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:29.449 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37531140 kB' 'MemAvailable: 42271028 kB' 'Buffers: 2696 kB' 'Cached: 18387176 kB' 'SwapCached: 0 kB' 'Active: 14400212 kB' 'Inactive: 4476780 kB' 'Active(anon): 13765080 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4476780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490272 kB' 'Mapped: 225692 kB' 'Shmem: 13277960 kB' 'KReclaimable: 243200 kB' 'Slab: 637344 kB' 'SReclaimable: 243200 kB' 'SUnreclaim: 394144 kB' 'KernelStack: 13024 kB' 'PageTables: 8864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14895364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199116 kB' 'VmallocChunk: 0 kB' 'Percpu: 40512 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2807388 kB' 'DirectMap2M: 19132416 kB' 'DirectMap1G: 47185920 kB'
00:02:29.449 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- [every key from MemTotal through HugePages_Free fails the HugePages_Rsvd match and hits `continue`]
00:02:29.451 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:29.451 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:02:29.451 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:29.451 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:29.451 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:29.451 nr_hugepages=1024
00:02:29.451 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:29.451 resv_hugepages=0
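Each of these unrolled scans computes a single-field lookup. Assuming the standard /proc/meminfo layout, the resv value recorded above is equivalent to one awk call:

    # One-line equivalent of the HugePages_Rsvd scan traced above.
    awk '$1 == "HugePages_Rsvd:" { print $2 }' /proc/meminfo

The test harness keeps the pure-bash loop instead, which avoids forking an external tool for every field read during setup.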
00:02:29.452 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:29.452 surplus_hugepages=0
00:02:29.452 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:29.452 anon_hugepages=0
00:02:29.452 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:29.452 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:29.452 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:29.452 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:29.452 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:02:29.452 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:02:29.452 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:29.452 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:29.452 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:29.452 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:29.452 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:29.452 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:29.452 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37530536 kB' 'MemAvailable: 42270424 kB' 'Buffers: 2696 kB' 'Cached: 18387204 kB' 'SwapCached: 0 kB' 'Active: 14400268 kB' 'Inactive: 4476780 kB' 'Active(anon): 13765136 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4476780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490336 kB' 'Mapped: 225692 kB' 'Shmem: 13277988 kB' 'KReclaimable: 243200 kB' 'Slab: 637344 kB' 'SReclaimable: 243200 kB' 'SUnreclaim: 394144 kB' 'KernelStack: 13056 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14895388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199100 kB' 'VmallocChunk: 0 kB' 'Percpu: 40512 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2807388 kB' 'DirectMap2M: 19132416 kB' 'DirectMap1G: 47185920 kB'
00:02:29.452 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- [every key from MemTotal through Unaccepted fails the HugePages_Total match and hits `continue`]
00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
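After printing the summary variables, hugepages.sh re-reads HugePages_Total and asserts the accounting identity the rest of the test relies on: the kernel's total pool must equal the requested nr_hugepages plus any surplus and reserved pages. A standalone sketch of that invariant, with variable names mirroring the trace (not the verbatim SPDK check):

    # Invariant asserted at setup/hugepages.sh@107/@110 in the trace:
    #   HugePages_Total == nr_hugepages + surp + resv
    nr_hugepages=1024 surp=0 resv=0   # values collected above
    total=$(awk '$1 == "HugePages_Total:" { print $2 }' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent ($total pages)"
    else
        echo "hugepage accounting mismatch: total=$total" >&2
    fi

Both checks pass here (1024 == 1024 + 0 + 0), so the run proceeds to the per-node verification below.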
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22572520 kB' 'MemUsed: 10257364 kB' 'SwapCached: 0 kB' 'Active: 7748036 kB' 'Inactive: 342492 kB' 'Active(anon): 7317644 kB' 'Inactive(anon): 0 kB' 'Active(file): 430392 kB' 'Inactive(file): 342492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7855260 kB' 'Mapped: 81296 kB' 'AnonPages: 238372 kB' 'Shmem: 7082376 kB' 'KernelStack: 8168 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120780 kB' 'Slab: 334536 kB' 'SReclaimable: 120780 kB' 'SUnreclaim: 213756 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.715 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 
-- # continue
[xtrace scan elided: the setup/common.sh@31-32 loop tests each remaining node0 meminfo field (MemUsed ... HugePages_Total) against HugePages_Surp and hits "continue" on every one]
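For reference, the scan producing these frames reduces to the following shape. This is a sketch reconstructed from the xtrace alone, not the verbatim SPDK setup/common.sh; details such as how the file contents reach the array may differ:

    shopt -s extglob   # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # a node id switches the source to that node's sysfs meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix of per-node files
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            # every non-matching field appears in the trace as a "continue" frame
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

For example, get_meminfo HugePages_Surp 1 reads /sys/devices/system/node/node1/meminfo and prints 0 here, which is the value the test folds into nodes_test below.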
00:02:29.716 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:29.716 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:29.716 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.716 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:29.716 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:29.716 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:29.716 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.716 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:29.716 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:29.716 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:29.716 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:29.716 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:29.716 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:29.716 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:29.716 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:02:29.716 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:02:29.716 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:29.716 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:29.716 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:29.716 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:29.716 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:29.716 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:29.716 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:29.716 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:29.716 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 14961140 kB' 'MemUsed: 12750704 kB' 'SwapCached: 0 kB' 'Active: 6652268 kB' 'Inactive: 4134288 kB' 'Active(anon): 6447528 kB' 'Inactive(anon): 0 kB' 'Active(file): 204740 kB' 'Inactive(file): 4134288 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10534664 kB' 'Mapped: 144396 kB' 'AnonPages: 251964 kB' 'Shmem: 6195636 kB' 'KernelStack: 4888 kB' 'PageTables: 4692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122420 kB' 'Slab: 302808 kB' 'SReclaimable: 122420 kB' 'SUnreclaim: 180388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
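The hugepages.sh@115-117 frames above fold reserved and surplus pages into each node's expected count; the dump just printed is then scanned for node1's HugePages_Surp (the elided loop below). Roughly, building on the get_meminfo sketch earlier (resv is presumably taken from HugePages_Rsvd earlier in verify_nr_hugepages; treat the names as read from the trace, not as the verbatim source):

    # expected pages per node = configured test pages + reserved + that node's surplus
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
    done

Both surplus reads return 0 here, so each node keeps its configured 512 pages.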
[xtrace scan elided: the same @31-32 loop walks the node1 dump above field by field (MemTotal ... FilePmdMapped), continuing past everything that is not HugePages_Surp]
00:02:29.717 10:42:45
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.717 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:29.717 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:29.717 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:29.717 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.717 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:29.717 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:29.717 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:29.717 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.717 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:02:29.717 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:29.717 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:29.717 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:29.717 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:02:29.717 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:02:29.717 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:29.717 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:29.717 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:29.717 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:29.717 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:29.717 node0=512 expecting 512 00:02:29.717 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:29.717 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:29.717 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:29.717 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:29.717 node1=512 expecting 512 00:02:29.717 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:29.717 00:02:29.717 real 0m1.577s 00:02:29.717 user 0m0.641s 00:02:29.717 sys 0m0.902s 00:02:29.717 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:29.717 10:42:45 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:29.717 ************************************ 00:02:29.717 END TEST per_node_1G_alloc 00:02:29.717 ************************************ 00:02:29.717 10:42:45 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:02:29.717 10:42:45 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:29.717 10:42:45 
setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:29.717 10:42:45 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:29.717 ************************************ 00:02:29.717 START TEST even_2G_alloc 00:02:29.717 ************************************ 00:02:29.717 10:42:45 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1121 -- # even_2G_alloc 00:02:29.717 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:02:29.717 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:29.717 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:29.717 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:29.717 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:29.717 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:29.717 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:29.718 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:29.718 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:29.718 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:29.718 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:29.718 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:29.718 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:29.718 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:29.718 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:29.718 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:29.718 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:02:29.718 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:29.718 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:29.718 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:29.718 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:29.718 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:29.718 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:29.718 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:02:29.718 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:02:29.718 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:02:29.718 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:29.718 10:42:45 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:31.099 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:31.099 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:31.099 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 
00:02:31.099 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:31.099 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:31.099 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:31.099 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:31.099 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:31.099 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:31.099 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:31.099 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:31.099 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:31.099 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:31.099 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:31.099 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:31.099 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:31.099 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:31.099 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:31.099 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:02:31.099 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:31.099 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:31.099 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:31.099 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:31.099 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37544480 kB' 'MemAvailable: 42284368 kB' 'Buffers: 2696 kB' 'Cached: 18387300 kB' 'SwapCached: 0 kB' 'Active: 14400676 kB' 'Inactive: 4476780 kB' 'Active(anon): 13765544 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4476780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 
0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490580 kB' 'Mapped: 225720 kB' 'Shmem: 13278084 kB' 'KReclaimable: 243200 kB' 'Slab: 637208 kB' 'SReclaimable: 243200 kB' 'SUnreclaim: 394008 kB' 'KernelStack: 13024 kB' 'PageTables: 8852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14895564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199100 kB' 'VmallocChunk: 0 kB' 'Percpu: 40512 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2807388 kB' 'DirectMap2M: 19132416 kB' 'DirectMap1G: 47185920 kB' 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:31.100 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue
[xtrace scan elided: the @31-32 loop walks the /proc/meminfo dump above field by field (Inactive ... HardwareCorrupted), hitting "continue" on everything that is not AnonHugePages]
00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:31.101 10:42:47
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37568816 kB' 'MemAvailable: 42308704 kB' 'Buffers: 2696 kB' 'Cached: 18387300 kB' 'SwapCached: 0 kB' 'Active: 14400852 kB' 'Inactive: 4476780 kB' 'Active(anon): 13765720 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4476780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 490772 kB' 'Mapped: 225720 kB' 'Shmem: 13278084 kB' 'KReclaimable: 243200 kB' 'Slab: 637200 kB' 'SReclaimable: 243200 kB' 'SUnreclaim: 394000 kB' 'KernelStack: 13040 kB' 'PageTables: 8792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14895584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199100 kB' 'VmallocChunk: 0 kB' 'Percpu: 40512 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2807388 kB' 'DirectMap2M: 19132416 kB' 'DirectMap1G: 47185920 kB' 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.101 
10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.101 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 
10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 
10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.102 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- 
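The scan that just completed is the whole of get_meminfo: it caches the meminfo lines in an array, streams them through a read loop, and returns the value of the first key that matches. Below is a minimal sketch of the function reconstructed purely from the xtrace above; the real setup/common.sh may differ in details (line numbers, error handling), so treat it as an illustration rather than the shipped code.

```bash
#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern used below

get_meminfo() {
	local get=$1 node=${2:-}
	local var val
	local mem_f mem

	mem_f=/proc/meminfo
	# With a node argument, read that node's own meminfo instead; with
	# node empty the path contains "node/node/" and the -e test fails,
	# exactly as the trace shows for the system-wide lookups.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Per-node files prefix each line with "Node <n> "; strip that prefix.
	mem=("${mem[@]#Node +([0-9]) }")

	# Replay the lines; each non-matching key is one of the
	# IFS/read/continue bursts seen in the trace.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

surp=$(get_meminfo HugePages_Surp)   # system-wide lookup, as at hugepages.sh@99
```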
00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:31.103 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37576512 kB' 'MemAvailable: 42316400 kB' 'Buffers: 2696 kB' 'Cached: 18387308 kB' 'SwapCached: 0 kB' 'Active: 14401224 kB' 'Inactive: 4476780 kB' 'Active(anon): 13766092 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4476780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491184 kB' 'Mapped: 225708 kB' 'Shmem: 13278092 kB' 'KReclaimable: 243200 kB' 'Slab: 637200 kB' 'SReclaimable: 243200 kB' 'SUnreclaim: 394000 kB' 'KernelStack: 13120 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14896980 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199116 kB' 'VmallocChunk: 0 kB' 'Percpu: 40512 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2807388 kB' 'DirectMap2M: 19132416 kB' 'DirectMap1G: 47185920 kB'
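The snapshot just printed already contains the numbers the rest of this test turns on. A quick sanity check on the values copied from the printf above confirms the pool the even_2G_alloc test name refers to: 1024 static hugepages of 2048 kB each, all still free.

```bash
# Values taken verbatim from the meminfo snapshot above.
echo $(( 1024 * 2048 ))            # 2097152 kB, matching 'Hugetlb: 2097152 kB'
echo $(( 2097152 / 1024 / 1024 ))  # 2 GiB total, i.e. the "2G" in even_2G_alloc
# 'HugePages_Free: 1024' == 'HugePages_Total: 1024': no pages in use yet.
```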
[ xtrace elided: setup/common.sh@31-32 repeats the read/continue skip for every key from MemTotal through HugePages_Free, none of which matches HugePages_Rsvd ]
00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:31.105 nr_hugepages=1024
00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:31.105 resv_hugepages=0
00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:31.105 surplus_hugepages=0
00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:31.105 anon_hugepages=0
00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
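The hugepages.sh@99-@110 lines traced above amount to a consistency check on that snapshot. A sketch of the logic, reusing the get_meminfo sketch earlier; the echo/verify structure follows the trace, but the failure handling at the end is an assumption:

```bash
nr_hugepages=1024                      # the pool size this test configured
surp=$(get_meminfo HugePages_Surp)     # surplus pages beyond the static pool -> 0
resv=$(get_meminfo HugePages_Rsvd)     # reserved-but-unfaulted pages -> 0
total=$(get_meminfo HugePages_Total)   # kernel's view of the pool -> 1024

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"

# The pool is healthy only if the kernel total equals the configured size
# plus surplus plus reservations (all zero in this run).
(( total == nr_hugepages + surp + resv )) || exit 1
```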
10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37576952 kB' 'MemAvailable: 42316840 kB' 'Buffers: 2696 kB' 'Cached: 18387344 kB' 'SwapCached: 0 kB' 'Active: 14401256 kB' 'Inactive: 4476780 kB' 'Active(anon): 13766124 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4476780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491264 kB' 'Mapped: 225712 kB' 'Shmem: 13278128 kB' 'KReclaimable: 243200 kB' 'Slab: 637264 kB' 'SReclaimable: 243200 kB' 'SUnreclaim: 394064 kB' 'KernelStack: 13280 kB' 'PageTables: 8868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14898004 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199196 kB' 'VmallocChunk: 0 kB' 'Percpu: 40512 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2807388 kB' 'DirectMap2M: 19132416 kB' 'DirectMap1G: 47185920 kB' 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.105 10:42:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.105 10:42:47 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.105 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
[log trimmed: the same "setup/common.sh@31 -- # IFS=': ' / read -r var val _ / @32 -- # continue" cycle repeats for every remaining meminfo key (Shmem through Unaccepted) until HugePages_Total matches below]
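The scan trimmed above is the generic key lookup in setup/common.sh that every get_meminfo call in this log steps through. A minimal reconstruction from what the xtrace itself shows -- the function name, variable names, and the @17-@33 steps are taken from the trace; the loop construct and the fallback return are assumptions:

    shopt -s extglob    # required for the +([0-9]) pattern used at @29

    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo
        # Per-node queries (e.g. "get_meminfo HugePages_Surp 0") switch to
        # that node's own meminfo file when sysfs exposes one (@23-@24).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"             # @28
        mem=("${mem[@]#Node +([0-9]) }")      # @29: strip the "Node N " prefix
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # the @31/@32 cycle trimmed above
            [[ $var == "$get" ]] || continue
            echo "$val"                              # @33, e.g. "echo 1024" below
            return 0
        done
        return 1
    }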
00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.106 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22619268 kB' 'MemUsed: 10210616 kB' 'SwapCached: 0 kB' 'Active: 7748196 kB' 'Inactive: 342492 kB' 'Active(anon): 7317804 kB' 'Inactive(anon): 0 kB' 'Active(file): 430392 kB' 'Inactive(file): 342492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7855384 kB' 'Mapped: 81308 kB' 'AnonPages: 238520 kB' 'Shmem: 7082500 kB' 'KernelStack: 8360 kB' 'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 
0 kB' 'KReclaimable: 120780 kB' 'Slab: 334504 kB' 'SReclaimable: 120780 kB' 'SUnreclaim: 213724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[log trimmed: the node0 snapshot above is scanned key by key (MemTotal through Unaccepted) against HugePages_Surp, each line taking the "@32 -- # continue" branch; the trace resumes below at the HugePages_Total/HugePages_Free keys and the matching HugePages_Surp entry]
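With the global HugePages_Total of 1024 confirmed, the test walks each NUMA node and folds reserved and surplus pages into that node's expected count. A sketch of the hugepages.sh@115-@117 loop traced above, reusing the get_meminfo reconstruction from earlier; the starting values are the ones this run reports:

    nodes_test=(512 512)   # expected pages per node after the even 2G split
    resv=0                 # reserved pages carried over from the global pass

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))               # @116
        surp=$(get_meminfo HugePages_Surp "$node")   # @117: 0 on both nodes here
        (( nodes_test[node] += surp ))
    done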
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.107 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.107 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.107 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.107 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.107 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.107 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.108 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.108 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.108 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.108 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.108 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:31.108 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:31.108 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:31.108 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:31.108 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:31.108 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:31.108 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:31.108 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:02:31.108 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:02:31.108 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:31.108 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:31.108 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:31.108 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:31.108 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:31.108 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:31.108 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.108 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.108 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 14965408 kB' 'MemUsed: 12746436 kB' 'SwapCached: 0 kB' 'Active: 6653568 kB' 'Inactive: 4134288 kB' 'Active(anon): 6448828 kB' 'Inactive(anon): 0 kB' 'Active(file): 204740 kB' 'Inactive(file): 4134288 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10534676 kB' 'Mapped: 144404 kB' 'AnonPages: 253296 kB' 'Shmem: 6195648 kB' 'KernelStack: 5128 kB' 'PageTables: 5484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 122420 kB' 'Slab: 302776 kB' 'SReclaimable: 122420 kB' 'SUnreclaim: 180356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[log trimmed: the node1 snapshot above is scanned key by key (MemTotal through Unaccepted) against HugePages_Surp, each line taking the "@32 -- # continue" branch; the trace resumes below at the HugePages_Total/HugePages_Free keys and the matching HugePages_Surp entry]
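Once both per-node surpluses are in, the verification at hugepages.sh@126-@130 keys two arrays by the counts themselves, so if every node landed on the same number each set collapses to a single entry. A sketch under that reading of the trace -- the final comparison line is an assumption, since the log only shows the already-expanded "[[ 512 == 512 ]]":

    declare -A sorted_t=() sorted_s=()
    nodes_test=(512 512)   # computed in the loop above
    nodes_sys=(512 512)    # read from /sys/devices/system/node earlier in the run

    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1   # @127: a set keyed by value
        sorted_s[${nodes_sys[node]}]=1
        echo "node${node}=${nodes_test[node]} expecting ${nodes_sys[node]}"   # @128
    done
    # Both sets collapse to the single key 512, so the @130 check passes.
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "even allocation verified"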
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.369 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.369 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.369 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.369 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.369 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.369 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.369 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:02:31.369 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:31.369 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:31.369 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:31.369 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:02:31.369 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:02:31.369 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:31.369 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:31.369 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:31.369 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:31.369 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:31.369 node0=512 expecting 512 00:02:31.369 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:31.369 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:31.369 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:31.369 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:02:31.369 node1=512 expecting 512 00:02:31.369 10:42:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:02:31.369 00:02:31.369 real 0m1.567s 00:02:31.369 user 0m0.652s 00:02:31.369 sys 0m0.883s 00:02:31.369 10:42:47 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:31.369 10:42:47 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:02:31.369 ************************************ 00:02:31.369 END TEST even_2G_alloc 00:02:31.369 ************************************ 00:02:31.369 10:42:47 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:02:31.369 10:42:47 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:31.369 10:42:47 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:31.369 10:42:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:31.369 ************************************ 00:02:31.369 START TEST odd_alloc 00:02:31.369 ************************************ 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- 
common/autotest_common.sh@1121 -- # odd_alloc 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:31.369 10:42:47 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:32.751 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:32.751 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:32.751 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:32.751 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:32.751 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:32.751 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:32.751 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:32.751 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:32.751 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:32.751 0000:80:04.7 (8086 0e27): 
Already using the vfio-pci driver 00:02:32.751 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:32.751 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:32.751 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:32.751 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:32.751 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:32.751 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:32.751 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:32.751 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:02:32.751 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:02:32.751 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:32.751 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:32.751 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:32.751 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:32.751 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:32.751 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:32.751 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:32.751 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:32.751 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:32.751 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:32.751 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:32.751 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:32.751 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:32.751 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:32.751 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:32.751 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:32.751 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.751 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.751 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37589744 kB' 'MemAvailable: 42329632 kB' 'Buffers: 2696 kB' 'Cached: 18387424 kB' 'SwapCached: 0 kB' 'Active: 14393612 kB' 'Inactive: 4476780 kB' 'Active(anon): 13758480 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4476780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483468 kB' 'Mapped: 224864 kB' 'Shmem: 13278208 kB' 'KReclaimable: 243200 kB' 'Slab: 637116 kB' 'SReclaimable: 243200 kB' 'SUnreclaim: 393916 kB' 'KernelStack: 12960 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14868132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199036 kB' 'VmallocChunk: 0 kB' 'Percpu: 40512 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2807388 kB' 'DirectMap2M: 19132416 kB' 'DirectMap1G: 47185920 kB'
[log trimmed: the global snapshot above is scanned key by key (MemTotal through HardwareCorrupted) against AnonHugePages, each line taking the "@32 -- # continue" branch, until the matching key below]
VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.752 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.752 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.752 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.752 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.752 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.752 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.752 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.752 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.752 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.752 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.752 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.752 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.752 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.752 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.752 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.752 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # 
printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37589788 kB' 'MemAvailable: 42329676 kB' 'Buffers: 2696 kB' 'Cached: 18387428 kB' 'SwapCached: 0 kB' 'Active: 14394320 kB' 'Inactive: 4476780 kB' 'Active(anon): 13759188 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4476780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484344 kB' 'Mapped: 224948 kB' 'Shmem: 13278212 kB' 'KReclaimable: 243200 kB' 'Slab: 637108 kB' 'SReclaimable: 243200 kB' 'SUnreclaim: 393908 kB' 'KernelStack: 12960 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 14868148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198988 kB' 'VmallocChunk: 0 kB' 'Percpu: 40512 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2807388 kB' 'DirectMap2M: 19132416 kB' 'DirectMap1G: 47185920 kB' 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.753 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
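The printf above is get_meminfo echoing back the full /proc/meminfo snapshot before scanning it for the requested key; the hugepage fields are already self-consistent (HugePages_Total: 1025 pages at Hugepagesize: 2048 kB is exactly the reported Hugetlb: 2099200 kB, since 1025 * 2048 = 2099200). Reconstructed from the setup/common.sh@16-@33 references in the xtrace, the helper is roughly the sketch below; the body is inferred from the trace rather than copied from the source, so the argument plumbing and the shopt line are assumptions.

  shopt -s extglob   # the +([0-9]) strip below needs extended globs; assumed
                     # to be enabled elsewhere in the real common.sh

  get_meminfo() { # get_meminfo <key> [node]
    local get=$1    # key to look up, e.g. HugePages_Surp (@17)
    local node=$2   # optional NUMA node; empty in this run (@18)
    local var val   # @19
    local mem_f mem # @20

    mem_f=/proc/meminfo                                          # @22
    # with a node argument, read that node's own meminfo instead
    if [[ -e /sys/devices/system/node/node$node/meminfo ]] \
       && [[ -n $node ]]; then                                   # @23, @25
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"                                    # @28
    mem=("${mem[@]#Node +([0-9]) }")  # strip "Node N " prefixes # @29

    # one "[[ key == pattern ]] / continue" pair per field in the xtrace
    while IFS=': ' read -r var val _; do                         # @31
      [[ $var == $get ]] || continue  # unquoted RHS, hence the escaped pattern in the trace
      echo "$val" && return 0                                    # @33
    done < <(printf '%s\n' "${mem[@]}")  # this printf is the @16 record
  }

Called as get_meminfo AnonHugePages it prints the 0 echoed at @33 above, and every @32 continue record in the log is this loop discarding one non-matching key.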
[... xtrace condensed: get_meminfo scans the snapshot printed above key by key against HugePages_Surp; each non-matching key from MemTotal through HugePages_Rsvd produces one continue / IFS=': ' / read -r var val _ cycle ...]
00:02:32.754 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:32.754 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:32.754 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:32.754 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:02:32.754 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... xtrace condensed: get_meminfo re-enters with get=HugePages_Rsvd (node still empty) and re-reads /proc/meminfo; this second snapshot differs from the first only in transient counters (Cached: 18387448 kB, Active: 14394024 kB, Active(anon): 13758892 kB, AnonPages: 483916 kB, Mapped: 224856 kB, Shmem: 13278232 kB, Slab: 637096 kB, SUnreclaim: 393896 kB, PageTables: 8284 kB, Committed_AS: 14868172 kB, VmallocUsed: 199004 kB), with all HugePages_* values unchanged; the key-by-key scan then repeats until the match ...]
00:02:32.756 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:32.756 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:02:32.756 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:02:32.756 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:32.756 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:02:32.756 nr_hugepages=1025
00:02:32.756 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:32.756 resv_hugepages=0
00:02:32.756 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:32.756 surplus_hugepages=0
00:02:32.756 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:32.756 anon_hugepages=0
00:02:32.756 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:02:32.756 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
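With anon=0, surp=0 and resv=0 collected, the @102-@109 records above are the actual assertion of the odd_alloc case: the pool must account for the full, deliberately odd request of 1025 pages. Folded into one runnable sketch (get_meminfo is the helper traced above; the wrapper function and the want variable are illustrative assumptions, not code from setup/hugepages.sh):

  verify_odd_alloc() {
    local want=1025                         # the deliberately odd request
    local anon surp resv nr_hugepages

    anon=$(get_meminfo AnonHugePages)       # hugepages.sh@97  -> 0
    surp=$(get_meminfo HugePages_Surp)      # hugepages.sh@99  -> 0
    resv=$(get_meminfo HugePages_Rsvd)      # hugepages.sh@100 -> 0
    # in the log $nr_hugepages is set earlier in hugepages.sh; fetching it
    # here keeps the sketch self-contained
    nr_hugepages=$(get_meminfo HugePages_Total)

    echo "nr_hugepages=$nr_hugepages"       # @102
    echo "resv_hugepages=$resv"             # @103
    echo "surplus_hugepages=$surp"          # @104
    echo "anon_hugepages=$anon"             # @105

    # allocated + surplus + reserved must cover the full request (@107) ...
    (( want == nr_hugepages + surp + resv )) || return 1
    # ... and with surp=resv=0 the kernel allocated exactly 1025 pages (@109)
    (( want == nr_hugepages ))
  }

Both checks pass here (1025 == 1025 + 0 + 0), and the trace continues below with one more get_meminfo pass, this time for HugePages_Total.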
00:02:32.756 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[... xtrace condensed: get_meminfo re-enters with get=HugePages_Total (node is empty, so mem_f=/proc/meminfo again) and prints a third snapshot; against the first it differs only in transient counters (MemFree: 37590108 kB, MemAvailable: 42329996 kB, Cached: 18387452 kB, Active: 14393872 kB, Active(anon): 13758740 kB, AnonPages: 483832 kB, Mapped: 224916 kB, Shmem: 13278236 kB, Slab: 637064 kB, SUnreclaim: 393864 kB, KernelStack: 12976 kB, PageTables: 8352 kB, Committed_AS: 14868192 kB, VmallocUsed: 198972 kB), while HugePages_Total: 1025, HugePages_Free: 1025, HugePages_Rsvd: 0 and HugePages_Surp: 0 are unchanged; the key-by-key scan then restarts from MemTotal and has reached NFS_Unstable by the end of this excerpt ...]
00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31
-- # IFS=': ' 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.758 10:42:48 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@33 -- # echo 1025 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22625396 kB' 'MemUsed: 10204488 kB' 'SwapCached: 0 kB' 'Active: 7745592 kB' 'Inactive: 342492 kB' 'Active(anon): 7315200 kB' 'Inactive(anon): 0 kB' 'Active(file): 430392 kB' 'Inactive(file): 342492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7855480 kB' 'Mapped: 80556 kB' 'AnonPages: 235788 kB' 'Shmem: 7082596 kB' 'KernelStack: 8184 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120780 kB' 'Slab: 334452 kB' 'SReclaimable: 120780 kB' 'SUnreclaim: 213672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:32.758 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
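What the common.sh@17-@33 entries above trace is get_meminfo pulling one field out of a meminfo snapshot, from /proc/meminfo for the machine-wide numbers or from sysfs when a NUMA node is named. A minimal sketch of that helper, reconstructed from this xtrace alone (the real setup/common.sh is not reproduced in this log, so treat the details as assumptions):

    #!/usr/bin/env bash
    shopt -s extglob                       # the +([0-9]) pattern below needs extglob

    get_meminfo() {                        # usage: get_meminfo <field> [numa-node]
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem
        # with a node index, prefer the per-node counters in sysfs; with $node
        # empty this probes .../node/node/meminfo, which never exists, exactly
        # as the common.sh@23 entries above show
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines carry a 'Node N ' prefix
        local IFS=': ' line
        for line in "${mem[@]}"; do
            read -r var val _ <<< "$line"  # 'HugePages_Total: 1025' -> var, val
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

Called without a node it returns 1025 here; called with node 0 it reads the node0 snapshot just printed.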
[xtrace elided: setup/common.sh@32 repeats the same scan over the node0 snapshot, continue-ing past every field from MemTotal through HugePages_Free until HugePages_Surp matches]
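Before the scan resolves below, note what the hugepages.sh@110-@117 entries add up to: a global consistency check followed by per-node accounting of reserved and surplus pages. A condensed sketch inferred from those trace lines, not taken from the script itself (resv and surp are assumed to have been fetched earlier via get_meminfo HugePages_Rsvd and HugePages_Surp):

    # every allocated page must be plain, reserved or surplus
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))
    # fold global reservations plus each node's own surplus into the
    # per-node expected counts before comparing them
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv )) || :
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") )) || :
    done

In this run both additions are zero, which is why the trace shows (( nodes_test[node] += 0 )).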
00:02:32.759 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.759 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:32.759 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:32.759 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:32.759 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:32.759 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:32.759 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:32.759 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:32.759 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:02:32.760 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:02:32.760 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:32.760 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:32.760 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:32.760 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:32.760 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:32.760 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:32.760 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.760 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.760 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 14964716 kB' 'MemUsed: 12747128 kB' 'SwapCached: 0 kB' 'Active: 6648012 kB' 'Inactive: 4134288 kB' 'Active(anon): 6443272 kB' 'Inactive(anon): 0 kB' 'Active(file): 204740 kB' 'Inactive(file): 4134288 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10534724 kB' 'Mapped: 144300 kB' 'AnonPages: 247636 kB' 'Shmem: 6195696 kB' 'KernelStack: 4744 kB' 'PageTables: 4064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122420 kB' 'Slab: 302612 kB' 'SReclaimable: 122420 kB' 'SUnreclaim: 180192 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:32.760 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.760 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.760 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.760 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:32.760 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.760 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:02:32.760 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:32.760 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
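Reading the two per-node snapshots together: node0 reports 'HugePages_Total: 512' and node1 'HugePages_Total: 513', so the odd 1025-page pool really did land unevenly across the sockets:

    # the per-node totals must add up to the global pool printed earlier
    echo $(( 512 + 513 ))   # 1025, matching 'HugePages_Total: 1025'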
[xtrace elided: the same field-by-field scan over the node1 snapshot until HugePages_Surp matches]
00:02:32.761 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:32.761 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:02:32.761 10:42:48 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:02:32.761 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
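With the surplus folded in, the loop that follows compares the kernel's placement against the request as a multiset, so the extra page may sit on either node. A sketch with this run's values (the echo lines it emits appear just below; note that bash leaves associative-array key order unspecified, and the @130 check relies on both sides expanding as '512 513'):

    declare -A sorted_t sorted_s
    nodes_test=([0]=512 [1]=513)      # pages the kernel actually placed
    nodes_sys=([0]=513 [1]=512)       # pages the test asked for per node
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1
        sorted_s[${nodes_sys[node]}]=1
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done
    # compare as sets; a strictly order-safe variant would sort the keys
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]]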
00:02:32.761 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:32.761 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:32.761 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:32.761 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
node0=512 expecting 513
00:02:32.761 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:32.761 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:32.761 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:32.761 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
node1=513 expecting 512
00:02:32.761 10:42:48 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:32.761
00:02:32.761 real 0m1.576s
00:02:32.761 user 0m0.699s
00:02:32.761 sys 0m0.844s
00:02:32.761 10:42:48 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:32.761 10:42:48 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:32.761 ************************************
00:02:32.761 END TEST odd_alloc
00:02:32.761 ************************************
00:02:33.020 10:42:48 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:33.020 10:42:48 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:33.020 10:42:48 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:33.020 10:42:48 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:02:33.020 ************************************
00:02:33.020 START TEST custom_alloc
00:02:33.020 ************************************
00:02:33.020 10:42:49 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1121 -- # custom_alloc 00:02:33.020 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:02:33.020 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:02:33.020 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:33.020 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
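The get_test_nr_hugepages call that opens custom_alloc converts a pool size in kB into a page count. The division below is inferred from size=1048576 yielding nr_hugepages=512 together with the 'Hugepagesize: 2048 kB' lines in the snapshots, so take the exact form as an assumption:

    default_hugepages=2048                 # kB per 2 MiB hugepage
    get_test_nr_hugepages() {
        local size=$1                      # requested pool size in kB
        (( size >= default_hugepages ))    # refuse requests below one page
        nr_hugepages=$(( size / default_hugepages ))
    }
    get_test_nr_hugepages 1048576          # 1 GiB -> nr_hugepages=512
    get_test_nr_hugepages 2097152          # 2 GiB -> nr_hugepages=1024, used below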
00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
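The @62-@84 entries above are the per-node spreader: explicit user nodes win, otherwise counts already pinned in nodes_hp are copied, otherwise the pool is split evenly. A rough sketch, with the exact bookkeeping behind the ':' no-op lines assumed:

    get_test_nr_hugepages_per_node() {
        local user_nodes=("$@") node
        local _nr_hugepages=$nr_hugepages _no_nodes=$no_nodes
        if (( ${#user_nodes[@]} > 0 )); then        # caller pinned nodes
            for node in "${user_nodes[@]}"; do
                nodes_test[node]=$_nr_hugepages
            done
        elif (( ${#nodes_hp[@]} > 0 )); then        # counts fixed earlier
            for node in "${!nodes_hp[@]}"; do
                nodes_test[node]=${nodes_hp[node]}
            done
        else                                        # default: even split
            while (( _no_nodes > 0 )); do
                nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
                : $(( _nr_hugepages -= nodes_test[_no_nodes - 1] ))
                : $(( _no_nodes -= 1 ))
            done
        fi
    }

That explains the trace: the first call (512 pages, nothing pinned) takes the even split (256 + 256); once nodes_hp[0]=512 and nodes_hp[1]=1024 are set, later calls copy them verbatim, producing the asymmetric layout.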
00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:33.021 10:42:49 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:02:34.402 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:34.402 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:34.402 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:34.402 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:34.402 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:34.402 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:34.402 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:34.402 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:34.402 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:34.402 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:02:34.402 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:02:34.402 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:02:34.402 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:02:34.402 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:02:34.402 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:02:34.402 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:02:34.402 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:34.403 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:34.403 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:34.403 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:02:34.403 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:34.403 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:02:34.403 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:02:34.403 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:02:34.403 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:02:34.403 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:34.403 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:34.403 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:34.403 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:34.403 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:34.403 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:34.403 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.403 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.403 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.403 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.403 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.403 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.403 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36543900 kB' 'MemAvailable: 41283788 kB' 'Buffers: 2696 kB' 'Cached: 18387564 kB' 'SwapCached: 0 kB' 'Active: 14394264 kB' 'Inactive: 4476780 kB' 'Active(anon): 13759132 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4476780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483980 kB' 'Mapped: 224864 kB' 'Shmem: 13278348 kB' 'KReclaimable: 243200 kB' 'Slab: 636900 kB' 'SReclaimable: 243200 kB' 'SUnreclaim: 393700 kB' 'KernelStack: 12960 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14868564 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198972 kB' 'VmallocChunk: 0 kB' 'Percpu: 40512 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2807388 kB' 'DirectMap2M: 19132416 kB' 'DirectMap1G: 47185920 kB' 00:02:34.403 10:42:50 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ [... repetitive xtrace elided: setup/common.sh@31-32 walks the snapshot key by key, hitting continue on every key from MemTotal through HardwareCorrupted while scanning for AnonHugePages ...] 00:02:34.404 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.404 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:34.404 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:34.404 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:02:34.404 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:34.404 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:34.404 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:34.404 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:34.404 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:34.404 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.404 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.404 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.404 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
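[Note: every get_meminfo call traced here follows the same pattern: slurp the meminfo file, strip any per-node "Node N " prefix, then scan "key: value" pairs until the requested key matches and print its value. AnonHugePages reads back 0 above, consistent with the 'always [madvise] never' THP setting checked earlier. A minimal sketch of that pattern, a simplified re-implementation rather than the verbatim setup/common.sh:

  #!/usr/bin/env bash
  shopt -s extglob  # needed for the +([0-9]) pattern below

  get_meminfo() {
    local get=$1 node=${2:-} var val rest
    local mem_f=/proc/meminfo
    local -a mem
    # With a node argument, read that node's meminfo instead.
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # strip the per-node "Node N " prefix
    # Scan key/value pairs; ': ' makes read split on colon and whitespace.
    while IFS=': ' read -r var val rest; do
      if [[ $var == "$get" ]]; then
        echo "${val:-0}"
        return 0
      fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1
  }

  get_meminfo AnonHugePages    # -> 0 on this box
  get_meminfo HugePages_Total  # -> 1536 after the custom allocation
]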
mem=("${mem[@]#Node +([0-9]) }") 00:02:34.404 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.404 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.404 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36547084 kB' 'MemAvailable: 41286972 kB' 'Buffers: 2696 kB' 'Cached: 18387568 kB' 'SwapCached: 0 kB' 'Active: 14393908 kB' 'Inactive: 4476780 kB' 'Active(anon): 13758776 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4476780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483624 kB' 'Mapped: 224864 kB' 'Shmem: 13278352 kB' 'KReclaimable: 243200 kB' 'Slab: 636836 kB' 'SReclaimable: 243200 kB' 'SUnreclaim: 393636 kB' 'KernelStack: 12960 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14868580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198972 kB' 'VmallocChunk: 0 kB' 'Percpu: 40512 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2807388 kB' 'DirectMap2M: 19132416 kB' 'DirectMap1G: 47185920 kB' 00:02:34.404 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.404 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.404 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.404 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.404 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.404 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
[[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.405 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
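[Note: the snapshots above agree on the hugepage state: HugePages_Total and HugePages_Free are 1536, Rsvd and Surp are 0, at a 2048 kB page size. That is consistent with the Hugetlb field and with the requested split of 512 pages on node 0 plus 1024 pages on node 1:

  echo $((1536 * 2048))         # -> 3145728 kB, matching 'Hugetlb: 3145728 kB'
  echo $((512 * 2048 / 1024))   # -> 1024 MiB (1 GiB) pinned on node 0
  echo $((1024 * 2048 / 1024))  # -> 2048 MiB (2 GiB) pinned on node 1
]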
00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.406 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36546332 kB' 'MemAvailable: 41286220 kB' 'Buffers: 2696 kB' 'Cached: 18387584 kB' 'SwapCached: 0 kB' 'Active: 14393316 kB' 'Inactive: 4476780 kB' 'Active(anon): 13758184 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4476780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483056 kB' 'Mapped: 224856 kB' 'Shmem: 13278368 kB' 'KReclaimable: 243200 kB' 'Slab: 636876 kB' 'SReclaimable: 243200 kB' 'SUnreclaim: 393676 kB' 'KernelStack: 12976 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14868604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198956 kB' 'VmallocChunk: 0 kB' 'Percpu: 40512 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2807388 kB' 'DirectMap2M: 19132416 kB' 'DirectMap1G: 47185920 kB' [... repetitive xtrace elided: the same key-by-key scan now runs for HugePages_Rsvd and continues past the end of this excerpt ...]
-- setup/common.sh@31 -- # read -r var val _ 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 
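
The @31/@32 cycle traced above is setup/common.sh's get_meminfo walking a meminfo file one line at a time until it reaches the requested field (here HugePages_Rsvd, which comes back 0). The \H\u\g\e\P\a\g\e\s\_\R\s\v\d spelling is just bash xtrace escaping a quoted right-hand side; the test is a plain string comparison, not a pattern match. A minimal self-contained sketch of that read/compare/continue pattern: the loop body mirrors the trace, while the function wrapper and argument handling here are assumptions for illustration.

  #!/usr/bin/env bash
  # Sketch: look up one field of a meminfo-style file the way the traced
  # loop does: split each line on ': ' and skip until the name matches.
  get_meminfo_sketch() {
      local get=$1 mem_f=${2:-/proc/meminfo} var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # every mismatch is one "continue" in the trace
          echo "$val"                        # value only, e.g. 0 for HugePages_Rsvd
          return 0
      done < "$mem_f"
      return 1                               # field not present in this file
  }

  get_meminfo_sketch HugePages_Rsvd

(The real helper first mapfiles the file into an array, as the @28 entries show; reading the file directly is a simplification.)
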
00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:02:34.408 nr_hugepages=1536 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:34.408 resv_hugepages=0 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:34.408 surplus_hugepages=0 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:34.408 anon_hugepages=0 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.408 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.409 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.409 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 36546332 kB' 'MemAvailable: 41286220 kB' 'Buffers: 2696 kB' 'Cached: 18387604 kB' 'SwapCached: 0 kB' 'Active: 14393344 kB' 'Inactive: 4476780 kB' 'Active(anon): 13758212 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4476780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483048 kB' 'Mapped: 224856 kB' 'Shmem: 13278388 kB' 'KReclaimable: 243200 kB' 'Slab: 636876 kB' 'SReclaimable: 243200 kB' 'SUnreclaim: 393676 kB' 'KernelStack: 12976 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 14868624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198972 kB' 'VmallocChunk: 0 kB' 'Percpu: 40512 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2807388 kB' 'DirectMap2M: 19132416 kB' 
'DirectMap1G: 47185920 kB'
00:02:34.409 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.409 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[identical @31 read / @32 compare / @32 continue cycles for MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted elided, 00:02:34.409-00:02:34.410]
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
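
The get_nodes trace just above enumerates the NUMA nodes with the extglob pattern node+([0-9]) and records how many hugepages this run expects on each one (512 on node0 and 1024 on node1, i.e. the 1536 total). A minimal sketch of that enumeration, plus a related cross-check (not the script's own @110 assertion, which compares against nr_hugepages + surplus + reserved): the array and variable names follow the trace, everything else is illustrative.

  #!/usr/bin/env bash
  shopt -s extglob nullglob            # node+([0-9]) needs extglob

  declare -a nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      # ${node##*node} strips everything through the last "node", leaving the id.
      nodes_sys[${node##*node}]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
  done
  no_nodes=${#nodes_sys[@]}
  (( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }

  # Cross-check: the per-node pools must sum to the system-wide pool
  # (512 + 1024 == 1536 in the run above).
  total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
  sum=0
  for n in "${nodes_sys[@]}"; do (( sum += n )); done
  (( sum == total )) && echo "nodes=$no_nodes, per-node hugepages sum to $total"
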
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:34.410 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 22630980 kB' 'MemUsed: 10198904 kB' 'SwapCached: 0 kB' 'Active: 7744600 kB' 'Inactive: 342492 kB' 'Active(anon): 7314208 kB' 'Inactive(anon): 0 kB' 'Active(file): 430392 kB' 'Inactive(file): 342492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7855552 kB' 'Mapped: 80564 kB' 'AnonPages: 234688 kB' 'Shmem: 7082668 kB' 'KernelStack: 8200 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120780 kB' 'Slab: 334324 kB' 'SReclaimable: 120780 kB' 'SUnreclaim: 213544 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[identical @31 read / @32 compare / @32 continue cycles for MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free elided, 00:02:34.410-00:02:34.412]
00:02:34.412 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:34.412 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:02:34.412 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:02:34.412 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:34.412 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:34.412 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:34.412 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
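
For the per-node passes the helper re-targets mem_f from /proc/meminfo to /sys/devices/system/node/nodeN/meminfo (common.sh@23-24 in the node0 pass above; in the earlier system-wide call, node was empty, so the node/meminfo existence test failed and it stayed on /proc/meminfo). Lines in the per-node file carry a "Node <id> " prefix, e.g. "Node 0 HugePages_Surp: 0", which is why the trace shows mem=("${mem[@]#Node +([0-9]) }") at common.sh@29: stripping the prefix lets the same field scan serve both files. A sketch of that normalization, reconstructed from the trace with the surrounding glue assumed:

  #!/usr/bin/env bash
  shopt -s extglob                     # the "Node +([0-9]) " pattern needs extglob

  node=0                               # illustrative node id
  mem_f=/proc/meminfo
  [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo

  mapfile -t mem < "$mem_f"            # one array element per line
  mem=("${mem[@]#Node +([0-9]) }")     # drop the leading "Node <id> ", if any
  printf '%s\n' "${mem[@]}" | awk -F': +' '$1 == "HugePages_Surp" {print $2}'
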
00:02:34.412 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:34.412 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:02:34.412 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:02:34.412 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:34.412 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:34.412 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:34.412 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:34.412 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:34.412 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:34.412 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:34.412 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:34.412 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711844 kB' 'MemFree: 13915352 kB' 'MemUsed: 13796492 kB' 'SwapCached: 0 kB' 'Active: 6648728 kB' 'Inactive: 4134288 kB' 'Active(anon): 6443988 kB' 'Inactive(anon): 0 kB' 'Active(file): 204740 kB' 'Inactive(file): 4134288 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 10534772 kB' 'Mapped: 144292 kB' 'AnonPages: 248324 kB' 'Shmem: 6195744 kB' 'KernelStack: 4760 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122420 kB' 'Slab: 302552 kB' 'SReclaimable: 122420 kB' 'SUnreclaim: 180132 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[identical @31 read / @32 compare / @32 continue cycles for MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages and ShmemHugePages elided, 00:02:34.412-00:02:34.671]
00:02:34.671 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 --
# IFS=': ' 00:02:34.671 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:02:34.672 node0=512 expecting 512 00:02:34.672 10:42:50 
00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
node1=1024 expecting 1024
00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:02:34.672 real 0m1.629s
00:02:34.672 user 0m0.681s
00:02:34.672 sys  0m0.915s
00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:02:34.672 10:42:50 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
************************************
END TEST custom_alloc
************************************
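For readers following the trace: custom_alloc's final verification, at hugepages.sh@126-130 above, walks the per-node hugepage counts, prints each observed/expected pair, and compares the joined counts in one string test. Below is a minimal bash sketch of that bookkeeping with this run's numbers (512 pages on node0, 1024 on node1); the array wiring is illustrative, not SPDK's literal code.

  # per-node counts observed in this run (indexed array keyed by NUMA node id)
  nodes_test=([0]=512 [1]=1024)
  declare -a sorted_t=()
  for node in "${!nodes_test[@]}"; do
    # bucket each distinct count, as sorted_t[nodes_test[node]]=1 does above
    sorted_t[nodes_test[node]]=1
    echo "node$node=${nodes_test[node]} expecting ${nodes_test[node]}"
  done
  # the final check joins the counts, mirroring [[ 512,1024 == 512,1024 ]]
  joined=$(IFS=,; echo "${nodes_test[*]}")
  [[ $joined == "512,1024" ]] && echo "custom_alloc split verified"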
00:02:34.672 10:42:50 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:02:34.672 10:42:50 setup.sh.hugepages -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:02:34.672 10:42:50 setup.sh.hugepages -- common/autotest_common.sh@1103 -- # xtrace_disable
00:02:34.672 10:42:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST no_shrink_alloc
************************************
00:02:34.672 10:42:50 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1121 -- # no_shrink_alloc
00:02:34.672 10:42:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:02:34.672 10:42:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:02:34.672 10:42:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:02:34.672 10:42:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:02:34.672 10:42:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:02:34.672 10:42:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:02:34.672 10:42:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:34.672 10:42:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:34.672 10:42:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:02:34.672 10:42:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:02:34.672 10:42:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:02:34.672 10:42:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:34.672 10:42:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:34.672 10:42:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:34.672 10:42:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:34.672 10:42:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:02:34.672 10:42:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:34.672 10:42:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:02:34.672 10:42:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
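The hugepages.sh@49-71 lines just traced convert the requested size into a page count and pin all of it on node 0. A hedged sketch of that arithmetic follows; the kB unit for both values is my assumption, though it is consistent with the Hugepagesize: 2048 kB entries in the meminfo snapshots below.

  # size -> page-count step, assuming size and default_hugepages are in kB
  size=2097152              # requested: 2097152 kB = 2 GiB
  default_hugepages=2048    # one 2 MiB hugepage, expressed in kB
  if (( size >= default_hugepages )); then
    nr_hugepages=$(( size / default_hugepages ))
  fi
  echo "$nr_hugepages"      # 1024, matching nr_hugepages=1024 in the trace
  nodes_test[0]=$nr_hugepages   # single user node '0', as traced at @70-71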
00:02:34.672 10:42:50 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:02:34.672 10:42:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:02:34.672 10:42:50 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:02:36.051 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:02:36.051 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:02:36.051 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:36.051 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:36.051 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:36.051 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:36.051 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:36.051 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:36.051 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:36.051 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:36.051 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:36.051 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:36.051 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:36.051 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:36.051 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:36.051 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:36.051 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:36.051 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:36.051 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:36.051 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:36.051 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37592796 kB' 'MemAvailable: 42332684 kB' 'Buffers: 2696 kB' 'Cached: 18387688 kB' 'SwapCached: 0 kB' 'Active: 14394448 kB' 'Inactive: 4476780 kB' 'Active(anon): 13759316 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4476780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483988 kB' 'Mapped: 224892 kB' 'Shmem: 13278472 kB' 'KReclaimable: 243200 kB' 'Slab: 636820 kB' 'SReclaimable: 243200 kB' 'SUnreclaim: 393620 kB' 'KernelStack: 12944 kB' 'PageTables: 8160 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14869116 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199068 kB' 'VmallocChunk: 0 kB' 'Percpu: 40512 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2807388 kB' 'DirectMap2M: 19132416 kB' 'DirectMap1G: 47185920 kB'
  (the setup/common.sh@31-32 scan then walks this snapshot field by field, from MemTotal through HardwareCorrupted, issuing continue on each non-matching key)
00:02:36.052 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:36.052 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:36.052 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:36.052 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
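The AnonHugePages lookup just completed is the pattern this trace repeats for every meminfo query: split each /proc/meminfo line on ': ', skip (continue) every non-matching key, then echo the value and return. A minimal sketch in bash; the statement shapes follow the traced common.sh@31-33 lines, but the function wrapper itself is illustrative, not SPDK's exact code.

  get_meminfo_sketch() {
    local get=$1 var val _
    # IFS=': ' splits "AnonHugePages:   0 kB" into var/val/unit
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] || continue   # the continue lines in the trace
      echo "$val"                        # e.g. 0 for AnonHugePages here
      return 0
    done < /proc/meminfo
    return 1                             # key not present
  }
  get_meminfo_sketch HugePages_Surp      # -> 0, as echoed in the trace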
00:02:36.052 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:36.052 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:36.052 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:36.052 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:36.052 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:36.052 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:36.052 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:36.052 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:36.052 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:36.052 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:36.052 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:36.052 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:36.053 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37596512 kB' 'MemAvailable: 42336400 kB' 'Buffers: 2696 kB' 'Cached: 18387688 kB' 'SwapCached: 0 kB' 'Active: 14395096 kB' 'Inactive: 4476780 kB' 'Active(anon): 13759964 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4476780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484720 kB' 'Mapped: 224892 kB' 'Shmem: 13278472 kB' 'KReclaimable: 243200 kB' 'Slab: 636784 kB' 'SReclaimable: 243200 kB' 'SUnreclaim: 393584 kB' 'KernelStack: 12960 kB' 'PageTables: 8148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14869132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199036 kB' 'VmallocChunk: 0 kB' 'Percpu: 40512 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2807388 kB' 'DirectMap2M: 19132416 kB' 'DirectMap1G: 47185920 kB'
  (the @31-32 scan walks this second snapshot field by field, from MemTotal through HugePages_Rsvd, continuing until the requested key matches)
00:02:36.054 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:36.054 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:36.054 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:36.054 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
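At this point verify_nr_hugepages has bound anon=0 and surp=0, and resv follows from the HugePages_Rsvd query below. As a hedged illustration only, reusing get_meminfo_sketch from the earlier sketch: the comparison shape below is an assumption about how such terms would be netted against the pool, not SPDK's literal check.

  anon=$(get_meminfo_sketch AnonHugePages)     # 0 in this log
  surp=$(get_meminfo_sketch HugePages_Surp)    # 0
  resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0, queried next below
  total=$(get_meminfo_sketch HugePages_Total)  # 1024 in the snapshots
  # assumed shape: the pool, net of surplus, should still equal the
  # 1024 pages requested by get_test_nr_hugepages
  (( total - surp == 1024 )) && echo "nr_hugepages intact"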
00:02:36.054 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... one IFS=': '/read/continue cycle per remaining /proc/meminfo field (MemFree through HugePages_Free) elided; none matches HugePages_Rsvd ...]
00:02:36.318 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:36.318 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:36.318 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
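The records above are setup/common.sh's get_meminfo helper scanning a meminfo file one "field: value" record at a time (common.sh@31-33) and printing the value of the first field that matches. The following is a condensed re-implementation sketched from the xtrace alone, not the verbatim common.sh source; only the names get_meminfo, get, node, mem_f and mem come from the trace:

    get_meminfo() {
        # Usage: get_meminfo FIELD [NODE]; prints FIELD's value, returns 0 on match.
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo
        # Per-NUMA-node counters live under sysfs; fall back to the global file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        # Node files prefix each line with "Node N "; strip it (extglob pattern).
        shopt -s extglob
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

Called as get_meminfo HugePages_Rsvd it reads /proc/meminfo; as get_meminfo HugePages_Surp 0 it reads node0's file, matching the two invocation forms in this trace.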
00:02:36.318 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:02:36.318 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:36.318 nr_hugepages=1024
00:02:36.318 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:36.318 resv_hugepages=0
00:02:36.318 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:36.318 surplus_hugepages=0
00:02:36.318 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:36.318 anon_hugepages=0
00:02:36.318 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:36.318 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:02:36.318 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:36.318 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:36.318 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:36.318 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:36.318 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:36.318 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:36.318 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:36.318 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:36.318 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:36.318 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:36.318 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:36.318 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:36.318 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37594880 kB' 'MemAvailable: 42334768 kB' 'Buffers: 2696 kB' 'Cached: 18387728 kB' 'SwapCached: 0 kB' 'Active: 14394688 kB' 'Inactive: 4476780 kB' 'Active(anon): 13759556 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4476780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484308 kB' 'Mapped: 224872 kB' 'Shmem: 13278512 kB' 'KReclaimable: 243200 kB' 'Slab: 636864 kB' 'SReclaimable: 243200 kB' 'SUnreclaim: 393664 kB' 'KernelStack: 12992 kB' 'PageTables: 8264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14868812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198988 kB' 'VmallocChunk: 0 kB' 'Percpu: 40512 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2807388 kB' 'DirectMap2M: 19132416 kB' 'DirectMap1G: 47185920 kB'
00:02:36.318 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:36.318 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... one IFS=': '/read/continue cycle per remaining field (MemFree through Unaccepted) elided; none matches HugePages_Total ...]
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
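hugepages.sh@99-110 above fold the three lookups into a consistency check: every allocated hugepage must be either a configured, a surplus, or a reserved page, and here 1024 == 1024 + 0 + 0 holds. A minimal sketch of that arithmetic, with one stated assumption: the trace only shows the already-expanded value 1024, so reading nr_hugepages from /proc/sys/vm/nr_hugepages is this sketch's guess, not something visible in the log.

    surp=$(get_meminfo HugePages_Surp)          # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)          # 0 in this run
    nr_hugepages=$(</proc/sys/vm/nr_hugepages)  # assumed source; 1024 here
    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"
    # Every allocated page must be accounted for as configured, surplus, or reserved.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) ||
        echo "hugepage accounting mismatch" >&2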
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:36.320 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21594856 kB' 'MemUsed: 11235028 kB' 'SwapCached: 0 kB' 'Active: 7745648 kB' 'Inactive: 342492 kB' 'Active(anon): 7315256 kB' 'Inactive(anon): 0 kB' 'Active(file): 430392 kB' 'Inactive(file): 342492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7855648 kB' 'Mapped: 80580 kB' 'AnonPages: 235664 kB' 'Shmem: 7082764 kB' 'KernelStack: 8216 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120780 kB' 'Slab: 334304 kB' 'SReclaimable: 120780 kB' 'SUnreclaim: 213524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... one IFS=': '/read/continue cycle per node0 meminfo field (MemTotal through HugePages_Free) elided; none matches HugePages_Surp ...]
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:36.321 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:36.321 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:36.321 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:36.321 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:36.321 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:36.321 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:36.321 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:36.321 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:36.321 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:36.321 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:36.321 node0=1024 expecting 1024 00:02:36.321 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:36.321 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:02:36.321 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:02:36.321 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:02:36.321 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:02:36.321 10:42:52 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:37.700 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:37.700 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:37.700 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:37.700 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:37.700 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:37.700 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:37.700 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:37.700 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:37.700 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:37.700 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:02:37.700 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:02:37.700 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:02:37.700 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:02:37.700 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:02:37.700 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:02:37.700 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:02:37.700 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:02:37.700 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:02:37.700 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:02:37.700 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:02:37.700 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:02:37.700 10:42:53 
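The per-key scan condensed above is the expansion of SPDK's get_meminfo helper (the setup/common.sh@17-@33 markers in the trace): it snapshots a meminfo file into an array and walks it line by line until the requested key matches. A minimal standalone sketch reconstructed from the expansions visible in this trace, with error handling simplified:

    #!/usr/bin/env bash
    shopt -s extglob
    # get_meminfo KEY [NODE]: print KEY's value from /proc/meminfo, or from
    # the per-node file /sys/devices/system/node/node$NODE/meminfo if given.
    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _ mem_f mem
        mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        # Per-node meminfo prefixes every line with "Node N "; strip it (extglob).
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    get_meminfo HugePages_Surp   # prints 0 here, matching the "echo 0" above

Because the shell runs under set -x, every [[ $var == "$get" ]] comparison and continue is echoed, which is why a single lookup produces the long key-by-key trace.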
00:02:37.700 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:02:37.700 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:02:37.700 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:02:37.700 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:02:37.700 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:02:37.700 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:02:37.700 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:02:37.700 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:37.700 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:37.700 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:37.700 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:37.700 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:37.700 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:37.700 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:37.700 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:37.700 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:37.700 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:37.700 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:37.700 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:37.700 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:37.700 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37605920 kB' 'MemAvailable: 42345808 kB' 'Buffers: 2696 kB' 'Cached: 18387804 kB' 'SwapCached: 0 kB' 'Active: 14395048 kB' 'Inactive: 4476780 kB' 'Active(anon): 13759916 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4476780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484604 kB' 'Mapped: 225008 kB' 'Shmem: 13278588 kB' 'KReclaimable: 243200 kB' 'Slab: 637060 kB' 'SReclaimable: 243200 kB' 'SUnreclaim: 393860 kB' 'KernelStack: 12976 kB' 'PageTables: 8200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14869360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199004 kB' 'VmallocChunk: 0 kB' 'Percpu: 40512 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2807388 kB' 'DirectMap2M: 19132416 kB' 'DirectMap1G: 47185920 kB'
00:02:37.700 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:37.700 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical "continue" iterations over the remaining keys (MemFree through HardwareCorrupted) elided ...]
00:02:37.701 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:37.701 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:37.701 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:37.701 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
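The hugepages.sh@96 test above gates the AnonHugePages lookup on transparent hugepage state: the kernel reports all THP modes in one sysfs file and brackets the active one, so the string only contains "[never]" when THP is fully disabled. A small illustration of the same check in standalone form (the surrounding logic here is hypothetical, only the sysfs path and bracket convention come from the trace):

    #!/usr/bin/env bash
    # The kernel brackets the active mode, e.g. "always [madvise] never".
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *\[never\]* ]]; then
        # THP may back anonymous memory, so AnonHugePages can be non-zero
        # and is worth reading before judging hugepage accounting.
        grep AnonHugePages /proc/meminfo
    else
        echo "THP disabled; AnonHugePages stays 0"
    fi

On this node the value is "always [madvise] never", so the branch is taken and get_meminfo AnonHugePages runs, returning 0.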
# mem=("${mem[@]#Node +([0-9]) }") 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37609436 kB' 'MemAvailable: 42349324 kB' 'Buffers: 2696 kB' 'Cached: 18387808 kB' 'SwapCached: 0 kB' 'Active: 14395256 kB' 'Inactive: 4476780 kB' 'Active(anon): 13760124 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4476780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484768 kB' 'Mapped: 225008 kB' 'Shmem: 13278592 kB' 'KReclaimable: 243200 kB' 'Slab: 637008 kB' 'SReclaimable: 243200 kB' 'SUnreclaim: 393808 kB' 'KernelStack: 12928 kB' 'PageTables: 8040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14869380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 198988 kB' 'VmallocChunk: 0 kB' 'Percpu: 40512 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2807388 kB' 'DirectMap2M: 19132416 kB' 'DirectMap1G: 47185920 kB' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.702 10:42:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.702 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.704 10:42:53 
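verify_nr_hugepages has now collected anon and surp, and next fetches resv: surplus pages are pages the kernel allocated beyond nr_hugepages, while reserved pages are promised to mappings but not yet faulted in, so both distort a naive total/free comparison. A sketch of how such counters can be folded into the check, using the values visible in this trace (illustrative arithmetic only, not necessarily SPDK's exact formula):

    #!/usr/bin/env bash
    # Values from the meminfo snapshots above (per-node, node0).
    total=1024 free=1024 resv=0 surp=0
    # Reserved pages still show as free but are not actually available;
    # surplus pages inflate the total beyond what was configured.
    available=$((free - resv))
    configured=$((total - surp))
    echo "configured=$configured available=$available"   # both 1024 here

With resv=0 and surp=0, the 1024 pages reported free really are the 1024 configured pages, which is what the earlier "node0=1024 expecting 1024" check asserted.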
00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:37.703 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:37.704 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:02:37.704 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37609360 kB' 'MemAvailable: 42349248 kB' 'Buffers: 2696 kB' 'Cached: 18387824 kB' 'SwapCached: 0 kB' 'Active: 14394564 kB' 'Inactive: 4476780 kB' 'Active(anon): 13759432 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4476780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484064 kB' 'Mapped: 224880 kB' 'Shmem: 13278608 kB' 'KReclaimable: 243200 kB' 'Slab: 637088 kB' 'SReclaimable: 243200 kB' 'SUnreclaim: 393888 kB' 'KernelStack: 12992 kB' 'PageTables: 8228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14869400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199004 kB' 'VmallocChunk: 0 kB' 'Percpu: 40512 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2807388 kB' 'DirectMap2M: 19132416 kB' 'DirectMap1G: 47185920 kB'
00:02:37.704 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:37.704 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical "continue" iterations over the intervening keys (MemFree through SecPageTables) elided ...]
00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.705 10:42:53
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.705 10:42:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:37.705 nr_hugepages=1024 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:37.705 resv_hugepages=0 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:37.705 surplus_hugepages=0 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:37.705 anon_hugepages=0 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:37.705 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.706 10:42:53 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 37608604 kB' 'MemAvailable: 42348492 kB' 'Buffers: 2696 kB' 'Cached: 18387848 kB' 'SwapCached: 0 kB' 'Active: 14394604 kB' 'Inactive: 4476780 kB' 'Active(anon): 13759472 kB' 'Inactive(anon): 0 kB' 'Active(file): 635132 kB' 'Inactive(file): 4476780 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484064 kB' 'Mapped: 224880 kB' 'Shmem: 13278632 kB' 'KReclaimable: 243200 kB' 'Slab: 637088 kB' 'SReclaimable: 243200 kB' 'SUnreclaim: 393888 kB' 'KernelStack: 12992 kB' 'PageTables: 8228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 14869424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 199004 kB' 'VmallocChunk: 0 kB' 'Percpu: 40512 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2807388 kB' 'DirectMap2M: 19132416 kB' 'DirectMap1G: 47185920 kB' 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:02:37.706 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
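For readers following the trace: setup/common.sh's get_meminfo has just snapshotted /proc/meminfo into the mem array and now walks it key by key (condensed below) until the requested field matches. A minimal, self-contained sketch of that parsing pattern -- simplified to a straight stream read rather than the script's mapfile array, so the helper name and shape here are illustrative, not the verbatim SPDK implementation:

    #!/usr/bin/env bash
    # Sketch: look up one key the way the traced loop does -- split each
    # /proc/meminfo line on ': ' and compare the first field.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # var is the key, val the numeric column; a unit such as "kB" lands in _
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch HugePages_Total   # would print 1024 on this runner

On this runner it would print 1024 for HugePages_Total, matching the echo 1024 seen further down the trace.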
[get_meminfo trace condensed: the same compare-and-continue cycle skips every key from MemTotal through Unaccepted while scanning for HugePages_Total]
00:02:37.707 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:37.707 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:02:37.707 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:37.707 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:37.707 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:02:37.707 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:02:37.707 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:37.707 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:37.707 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:37.707 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:37.707 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:37.707 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:37.707 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:37.707 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:37.707 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:37.707 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:37.707 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:02:37.707 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:02:37.707 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:02:37.707 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:37.707 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:37.707 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:37.707 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:02:37.707 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:37.707 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:02:37.707 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
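The call being traced here is get_meminfo HugePages_Surp 0: because a node argument was given, mem_f switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips. A hedged sketch of the same per-node lookup (function name and the sed-based prefix strip are illustrative substitutes for the script's array handling):

    #!/usr/bin/env bash
    # Sketch: per-NUMA-node variant of the lookup. Lines in
    # /sys/devices/system/node/node<N>/meminfo look like
    #   Node 0 HugePages_Surp:     0
    # so the "Node <N> " prefix is stripped before the usual ': ' split.
    get_node_meminfo_sketch() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed -E "s/^Node $node //" "$mem_f")
        return 1
    }

    get_node_meminfo_sketch HugePages_Surp 0   # prints 0, as in the trace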
00:02:37.708 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21612340 kB' 'MemUsed: 11217544 kB' 'SwapCached: 0 kB' 'Active: 7746488 kB' 'Inactive: 342492 kB' 'Active(anon): 7316096 kB' 'Inactive(anon): 0 kB' 'Active(file): 430392 kB' 'Inactive(file): 342492 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7855776 kB' 'Mapped: 80584 kB' 'AnonPages: 236356 kB' 'Shmem: 7082892 kB' 'KernelStack: 8264 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120780 kB' 'Slab: 334416 kB' 'SReclaimable: 120780 kB' 'SUnreclaim: 213636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[get_meminfo trace condensed: the same compare-and-continue cycle skips the node0 keys MemTotal through HugePages_Free while scanning for HugePages_Surp]
00:02:37.709 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.709 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:02:37.709 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:02:37.709 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:37.709 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:37.709 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:37.709 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:37.709 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:37.709 node0=1024 expecting 1024
00:02:37.709 10:42:53 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:37.709 
00:02:37.709 real 0m3.226s
00:02:37.709 user 0m1.338s
00:02:37.709 sys 0m1.824s
00:02:37.709 10:42:53 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:02:37.709 10:42:53 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:02:37.709 ************************************
00:02:37.709 END TEST no_shrink_alloc
00:02:37.709 ************************************
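no_shrink_alloc just passed by summing the per-node hugepage counts against the global pool ("node0=1024 expecting 1024"). A small sketch of that per-node accounting, assuming the standard sysfs node layout (variable names illustrative, not the hugepages.sh internals):

    #!/usr/bin/env bash
    # Sketch: sum each node's HugePages_Total from its sysfs meminfo and
    # compare against the expected global pool size, as the test just did.
    expected=1024
    total=0
    for node in /sys/devices/system/node/node[0-9]*; do
        n=${node##*node}
        pages=$(awk '$3 == "HugePages_Total:" { print $4 }' "$node/meminfo")
        echo "node$n=$pages"
        (( total += pages ))
    done
    (( total == expected )) && echo "expecting $expected: OK"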
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:37.999 10:42:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:37.999 10:42:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:37.999 10:42:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:37.999 10:42:53 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:37.999 10:42:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:37.999 10:42:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:37.999 10:42:53 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:37.999 10:42:53 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:02:37.999 10:42:53 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:37.999 10:42:53 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:37.999 00:02:37.999 real 0m12.482s 00:02:37.999 user 0m4.810s 00:02:37.999 sys 0m6.470s 00:02:37.999 10:42:53 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:37.999 10:42:53 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:02:37.999 ************************************ 00:02:37.999 END TEST hugepages 00:02:37.999 ************************************ 00:02:37.999 10:42:53 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:37.999 10:42:53 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:37.999 10:42:53 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:37.999 10:42:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:37.999 ************************************ 00:02:37.999 START TEST driver 00:02:37.999 ************************************ 00:02:37.999 10:42:54 setup.sh.driver -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:02:37.999 * Looking for test storage... 
00:02:37.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:37.999 10:42:54 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:02:37.999 10:42:54 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:37.999 10:42:54 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:40.531 10:42:56 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:02:40.531 10:42:56 setup.sh.driver -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:40.531 10:42:56 setup.sh.driver -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:40.531 10:42:56 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:02:40.531 ************************************ 00:02:40.531 START TEST guess_driver 00:02:40.531 ************************************ 00:02:40.531 10:42:56 setup.sh.driver.guess_driver -- common/autotest_common.sh@1121 -- # guess_driver 00:02:40.531 10:42:56 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:02:40.531 10:42:56 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:02:40.531 10:42:56 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:02:40.531 10:42:56 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:02:40.531 10:42:56 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:02:40.531 10:42:56 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:02:40.531 10:42:56 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:02:40.531 10:42:56 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:02:40.531 10:42:56 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:02:40.531 10:42:56 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 189 > 0 )) 00:02:40.531 10:42:56 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:02:40.531 10:42:56 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:02:40.531 10:42:56 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:02:40.531 10:42:56 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:02:40.531 10:42:56 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:02:40.531 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:40.531 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:40.531 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:02:40.531 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:02:40.531 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:02:40.531 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:02:40.531 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:02:40.531 10:42:56 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:02:40.531 10:42:56 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:02:40.531 10:42:56 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:02:40.531 10:42:56 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:02:40.531 10:42:56 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:02:40.531 Looking for driver=vfio-pci 00:02:40.531 10:42:56 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:40.531 10:42:56 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:02:40.531 10:42:56 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:02:40.531 10:42:56 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.907 10:42:57 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:41.907 10:42:57 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:42.843 10:42:58 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:02:42.843 10:42:58 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:02:42.843 10:42:58 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:02:43.102 10:42:59 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:02:43.102 10:42:59 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:02:43.102 10:42:59 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:43.102 10:42:59 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:45.636 00:02:45.636 real 0m4.989s 00:02:45.636 user 0m1.182s 00:02:45.636 sys 0m1.974s 00:02:45.636 10:43:01 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:45.636 10:43:01 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:02:45.636 ************************************ 00:02:45.636 END TEST guess_driver 00:02:45.636 ************************************ 00:02:45.636 00:02:45.636 real 0m7.559s 00:02:45.636 user 0m1.761s 00:02:45.636 sys 0m3.101s 00:02:45.636 10:43:01 setup.sh.driver -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:45.636 
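The guess_driver trace above reduces to a short probe: vfio-pci is selected when IOMMU groups exist under sysfs (189 in this run) and modprobe can resolve the vfio_pci module chain; otherwise the script reports no valid driver. A condensed sketch of that logic, using the same sysfs paths and commands the trace shows — treating the IOMMU-group count and the unsafe no-IOMMU flag as alternative conditions is an inference from the trace, not a quote of driver.sh:

  # Probe for vfio-pci, loosely following the checks setup/driver.sh traces above.
  unsafe_vfio=N
  if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
    unsafe_vfio=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
  fi
  iommu_groups=(/sys/kernel/iommu_groups/*)
  if { (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; } &&
     modprobe --show-depends vfio_pci | grep -q '\.ko'; then
    echo vfio-pci                      # the driver the test then confirms per PCI device
  else
    echo 'No valid driver found'       # the string driver.sh@51 tests against
  fi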
10:43:01 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:02:45.636 ************************************ 00:02:45.636 END TEST driver 00:02:45.636 ************************************ 00:02:45.636 10:43:01 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:02:45.636 10:43:01 setup.sh -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:45.636 10:43:01 setup.sh -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:45.636 10:43:01 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:45.636 ************************************ 00:02:45.636 START TEST devices 00:02:45.636 ************************************ 00:02:45.636 10:43:01 setup.sh.devices -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:02:45.636 * Looking for test storage... 00:02:45.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:45.636 10:43:01 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:02:45.636 10:43:01 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:02:45.636 10:43:01 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:45.636 10:43:01 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:47.539 10:43:03 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:02:47.539 10:43:03 setup.sh.devices -- common/autotest_common.sh@1665 -- # zoned_devs=() 00:02:47.539 10:43:03 setup.sh.devices -- common/autotest_common.sh@1665 -- # local -gA zoned_devs 00:02:47.539 10:43:03 setup.sh.devices -- common/autotest_common.sh@1666 -- # local nvme bdf 00:02:47.539 10:43:03 setup.sh.devices -- common/autotest_common.sh@1668 -- # for nvme in /sys/block/nvme* 00:02:47.539 10:43:03 setup.sh.devices -- common/autotest_common.sh@1669 -- # is_block_zoned nvme0n1 00:02:47.539 10:43:03 setup.sh.devices -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:02:47.539 10:43:03 setup.sh.devices -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:47.539 10:43:03 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:02:47.539 10:43:03 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:02:47.539 10:43:03 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:02:47.539 10:43:03 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:02:47.539 10:43:03 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:02:47.539 10:43:03 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:02:47.539 10:43:03 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:02:47.539 10:43:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:02:47.539 10:43:03 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:02:47.539 10:43:03 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:02:47.539 10:43:03 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:47.539 10:43:03 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:02:47.539 10:43:03 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:02:47.539 10:43:03 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:02:47.539 No valid GPT data, 
bailing 00:02:47.539 10:43:03 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:47.539 10:43:03 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:02:47.539 10:43:03 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:02:47.539 10:43:03 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:02:47.539 10:43:03 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:02:47.539 10:43:03 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:02:47.539 10:43:03 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:02:47.539 10:43:03 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:02:47.539 10:43:03 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:02:47.539 10:43:03 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:02:47.539 10:43:03 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:02:47.539 10:43:03 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:02:47.539 10:43:03 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:02:47.539 10:43:03 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:47.539 10:43:03 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:47.539 10:43:03 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:02:47.539 ************************************ 00:02:47.539 START TEST nvme_mount 00:02:47.539 ************************************ 00:02:47.539 10:43:03 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1121 -- # nvme_mount 00:02:47.539 10:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:02:47.539 10:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:02:47.539 10:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:47.539 10:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:47.539 10:43:03 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:02:47.539 10:43:03 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:02:47.539 10:43:03 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:02:47.539 10:43:03 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:02:47.539 10:43:03 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:02:47.539 10:43:03 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:02:47.539 10:43:03 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:02:47.539 10:43:03 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:02:47.539 10:43:03 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:47.539 10:43:03 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:02:47.539 10:43:03 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:02:47.539 10:43:03 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:47.539 10:43:03 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:02:47.539 10:43:03 
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:02:47.539 10:43:03 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:02:48.475 Creating new GPT entries in memory. 00:02:48.475 GPT data structures destroyed! You may now partition the disk using fdisk or 00:02:48.475 other utilities. 00:02:48.475 10:43:04 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:02:48.475 10:43:04 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:48.475 10:43:04 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:02:48.475 10:43:04 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:02:48.475 10:43:04 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:02:49.413 Creating new GPT entries in memory. 00:02:49.413 The operation has completed successfully. 00:02:49.413 10:43:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:02:49.413 10:43:05 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:49.413 10:43:05 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2656982 00:02:49.413 10:43:05 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:49.413 10:43:05 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:02:49.413 10:43:05 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:49.413 10:43:05 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:02:49.413 10:43:05 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:02:49.413 10:43:05 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:49.413 10:43:05 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:49.413 10:43:05 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:49.413 10:43:05 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:02:49.413 10:43:05 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:49.413 10:43:05 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:49.413 10:43:05 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:02:49.413 10:43:05 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:49.413 10:43:05 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:02:49.413 10:43:05 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
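The mkfs helper traced just above (setup/common.sh@66-72) is only mkdir, mkfs.ext4, mount. A minimal standalone equivalent — the wrapper name and example paths are stand-ins; the -qF flags and the optional trailing size argument are taken from the trace:

  # Format a device (optionally to a given size) and mount it, as common.sh@71-72 does.
  mkfs_and_mount() {
    local dev=$1 mount=$2 size=$3      # size empty -> format the whole device
    mkdir -p "$mount"
    [[ -e $dev ]] || return 1
    mkfs.ext4 -qF "$dev" $size         # size appears later in this log as 1024M
    mount "$dev" "$mount"
  }
  mkfs_and_mount /dev/nvme0n1p1 /tmp/nvme_mount   # hypothetical usage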
00:02:49.413 10:43:05 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:49.413 10:43:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:49.413 10:43:05 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:02:49.413 10:43:05 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:02:49.413 10:43:05 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:02:50.788 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:02:50.788 10:43:06 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:02:51.048 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:02:51.048 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:02:51.048 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:02:51.048 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:02:51.048 10:43:07 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:02:51.048 10:43:07 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:02:51.048 10:43:07 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:51.048 10:43:07 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:02:51.048 10:43:07 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:02:51.048 10:43:07 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:51.048 10:43:07 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:51.048 10:43:07 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:51.048 10:43:07 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:02:51.048 10:43:07 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:51.048 10:43:07 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:51.048 10:43:07 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:02:51.048 10:43:07 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:51.048 10:43:07 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:02:51.048 10:43:07 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:02:51.048 10:43:07 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:51.048 10:43:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:51.048 10:43:07 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:02:51.048 10:43:07 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:02:51.048 10:43:07 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.421 10:43:08 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:02:52.421 10:43:08 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:53.794 10:43:09 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:53.794 10:43:10 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:53.794 10:43:10 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:02:53.794 10:43:10 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:02:53.794 10:43:10 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:02:53.794 10:43:10 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:53.794 10:43:10 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:02:53.794 10:43:10 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:02:53.794 10:43:10 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:02:53.794 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:02:53.794 00:02:53.794 real 0m6.673s 00:02:53.794 user 0m1.633s 00:02:53.794 sys 0m2.642s 00:02:53.794 10:43:10 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:53.794 10:43:10 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:02:53.794 ************************************ 00:02:53.794 END TEST nvme_mount 00:02:53.794 ************************************ 
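The nvme_mount test that just ended runs one fixed cycle: zap the GPT, carve a single 1 GiB partition, format and mount it, drop a test file, confirm setup.sh reports the device as active instead of rebinding it, then unmount and wipe. The whole pass condenses to a few commands; the sketch below reuses the disk and sector numbers from the trace, while the mount point stands in for the workspace path:

  disk=/dev/nvme0n1
  sgdisk "$disk" --zap-all                 # destroy any existing GPT/MBR structures
  sgdisk "$disk" --new=1:2048:2099199      # 2097152 sectors = 1 GiB at 512 B/sector
  mkfs.ext4 -qF "${disk}p1"
  mkdir -p /tmp/nvme_mount && mount "${disk}p1" /tmp/nvme_mount
  : > /tmp/nvme_mount/test_nvme            # dummy file the verify step checks for
  umount /tmp/nvme_mount
  wipefs --all "${disk}p1" && wipefs --all "$disk"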
00:02:54.054 10:43:10 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:02:54.054 10:43:10 setup.sh.devices -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:02:54.054 10:43:10 setup.sh.devices -- common/autotest_common.sh@1103 -- # xtrace_disable 00:02:54.054 10:43:10 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:02:54.054 ************************************ 00:02:54.054 START TEST dm_mount 00:02:54.054 ************************************ 00:02:54.054 10:43:10 setup.sh.devices.dm_mount -- common/autotest_common.sh@1121 -- # dm_mount 00:02:54.054 10:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:02:54.054 10:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:02:54.054 10:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:02:54.054 10:43:10 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:02:54.054 10:43:10 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:02:54.054 10:43:10 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:02:54.054 10:43:10 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:02:54.054 10:43:10 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:02:54.054 10:43:10 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:02:54.054 10:43:10 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:02:54.054 10:43:10 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:02:54.054 10:43:10 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:54.054 10:43:10 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:02:54.054 10:43:10 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:02:54.054 10:43:10 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:54.054 10:43:10 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:02:54.054 10:43:10 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:02:54.054 10:43:10 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:02:54.054 10:43:10 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:02:54.054 10:43:10 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:02:54.054 10:43:10 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:02:54.988 Creating new GPT entries in memory. 00:02:54.988 GPT data structures destroyed! You may now partition the disk using fdisk or 00:02:54.988 other utilities. 00:02:54.988 10:43:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:02:54.988 10:43:11 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:54.988 10:43:11 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:02:54.988 10:43:11 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:02:54.988 10:43:11 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:02:55.926 Creating new GPT entries in memory. 00:02:55.926 The operation has completed successfully. 
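dm_mount needs two partitions, so partition_drive repeats the sgdisk step: each call runs under flock on the whole-disk node while sync_dev_uevents.sh (from the SPDK scripts tree) waits for the matching udev events, so the test does not proceed before /dev/nvme0n1p1 and p2 exist. The loop the trace is walking through is equivalent to this sketch, with the sector arithmetic copied from common.sh@58-59:

  disk=/dev/nvme0n1 part_no=2
  size=$((1073741824 / 512))               # 1 GiB expressed in 512 B sectors
  part_start=0 part_end=0
  for (( part = 1; part <= part_no; part++ )); do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    # flock serializes sgdisk against anything else touching the disk node
    flock "$disk" sgdisk "$disk" --new=$part:$part_start:$part_end
  done
  # yields --new=1:2048:2099199 and --new=2:2099200:4196351, as in the log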
00:02:55.926 10:43:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:02:55.926 10:43:12 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:55.926 10:43:12 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:02:55.926 10:43:12 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:02:55.926 10:43:12 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:02:56.899 The operation has completed successfully. 00:02:56.899 10:43:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:02:56.899 10:43:13 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:02:56.899 10:43:13 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2659658 00:02:56.899 10:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:02:56.899 10:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:02:56.899 10:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:02:56.899 10:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:02:57.156 10:43:13 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:02:58.529 10:43:14 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:02:58.529 10:43:14 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:02:59.908 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:02:59.908 00:02:59.908 real 0m5.892s 00:02:59.908 user 0m1.084s 00:02:59.908 sys 0m1.686s 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # xtrace_disable 00:02:59.908 10:43:15 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:02:59.908 ************************************ 00:02:59.908 END TEST dm_mount 00:02:59.908 ************************************ 00:02:59.908 10:43:15 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:02:59.908 10:43:15 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:02:59.908 10:43:15 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:02:59.908 10:43:15 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:02:59.908 10:43:15 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:02:59.908 10:43:15 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:02:59.908 10:43:15 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:00.166 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:00.166 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:00.166 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:00.166 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:00.166 10:43:16 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:00.166 10:43:16 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:00.166 10:43:16 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:00.166 10:43:16 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:00.166 10:43:16 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:00.166 10:43:16 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:00.166 10:43:16 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:00.166 00:03:00.166 real 0m14.646s 00:03:00.166 user 0m3.435s 00:03:00.166 sys 0m5.457s 00:03:00.166 10:43:16 setup.sh.devices -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:00.166 10:43:16 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:00.166 ************************************ 00:03:00.166 END TEST devices 00:03:00.166 ************************************ 00:03:00.166 00:03:00.166 real 0m46.430s 00:03:00.166 user 0m13.762s 00:03:00.166 sys 0m21.220s 00:03:00.166 10:43:16 setup.sh -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:00.166 10:43:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:00.166 ************************************ 00:03:00.166 END TEST setup.sh 00:03:00.166 ************************************ 00:03:00.166 10:43:16 -- spdk/autotest.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:01.540 Hugepages 00:03:01.540 node hugesize free / total 00:03:01.540 node0 1048576kB 0 / 0 00:03:01.540 node0 2048kB 2048 / 2048 00:03:01.540 node1 1048576kB 0 / 0 00:03:01.540 node1 2048kB 0 / 0 00:03:01.540 00:03:01.540 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:01.540 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:01.540 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:01.540 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:01.540 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:01.540 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:01.540 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:01.540 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:01.540 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:01.540 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:01.540 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:01.540 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:01.540 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:01.540 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:01.540 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:01.540 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:01.540 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:01.540 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:01.540 10:43:17 -- spdk/autotest.sh@139 -- # uname -s 00:03:01.540 10:43:17 -- 
spdk/autotest.sh@139 -- # [[ Linux == Linux ]] 00:03:01.540 10:43:17 -- spdk/autotest.sh@141 -- # nvme_namespace_revert 00:03:01.540 10:43:17 -- common/autotest_common.sh@1527 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:02.917 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:02.917 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:02.917 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:02.917 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:02.917 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:02.917 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:02.917 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:02.917 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:02.917 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:02.917 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:02.917 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:02.917 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:02.917 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:02.917 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:02.917 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:02.917 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:03.854 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:04.113 10:43:20 -- common/autotest_common.sh@1528 -- # sleep 1 00:03:05.048 10:43:21 -- common/autotest_common.sh@1529 -- # bdfs=() 00:03:05.048 10:43:21 -- common/autotest_common.sh@1529 -- # local bdfs 00:03:05.048 10:43:21 -- common/autotest_common.sh@1530 -- # bdfs=($(get_nvme_bdfs)) 00:03:05.048 10:43:21 -- common/autotest_common.sh@1530 -- # get_nvme_bdfs 00:03:05.048 10:43:21 -- common/autotest_common.sh@1509 -- # bdfs=() 00:03:05.048 10:43:21 -- common/autotest_common.sh@1509 -- # local bdfs 00:03:05.048 10:43:21 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:05.048 10:43:21 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:05.048 10:43:21 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:03:05.306 10:43:21 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:03:05.306 10:43:21 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:03:05.306 10:43:21 -- common/autotest_common.sh@1532 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:06.683 Waiting for block devices as requested 00:03:06.683 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:06.683 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:06.683 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:06.683 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:06.942 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:06.942 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:06.942 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:06.942 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:06.942 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:07.201 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:07.201 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:07.201 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:07.458 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:07.458 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:07.458 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:07.458 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:07.717 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:07.717 10:43:23 -- common/autotest_common.sh@1534 -- # for bdf in "${bdfs[@]}" 
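
The namespace-revert pass traced above relies on autotest's get_nvme_bdfs helper: gen_nvme.sh emits an SPDK bdev config as JSON and jq pulls each controller's PCI address out of .config[].params.traddr. The loop whose trace continues below then inspects each controller's OACS field with nvme id-ctrl. A minimal standalone sketch of both steps, assuming jq and nvme-cli are installed, that gen_nvme.sh produces the JSON layout shown in these entries, and that the controller is reachable as /dev/nvme0 as in this run:

  # Collect NVMe BDFs the way autotest's get_nvme_bdfs does.
  # rootdir matches the workspace layout used in this run; adjust as needed.
  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"   # this run prints a single BDF: 0000:88:00.0

  # The OACS check traced in the entries that follow, in isolation:
  # bit 0x8 of OACS advertises namespace management support.
  oacs=$(nvme id-ctrl /dev/nvme0 | grep oacs | cut -d: -f2)
  (( oacs & 0x8 )) && echo "namespace management supported"
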
00:03:07.717 10:43:23 -- common/autotest_common.sh@1535 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:07.717 10:43:23 -- common/autotest_common.sh@1498 -- # readlink -f /sys/class/nvme/nvme0 00:03:07.717 10:43:23 -- common/autotest_common.sh@1498 -- # grep 0000:88:00.0/nvme/nvme 00:03:07.717 10:43:23 -- common/autotest_common.sh@1498 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:07.717 10:43:23 -- common/autotest_common.sh@1499 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:07.717 10:43:23 -- common/autotest_common.sh@1503 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:07.717 10:43:23 -- common/autotest_common.sh@1503 -- # printf '%s\n' nvme0 00:03:07.717 10:43:23 -- common/autotest_common.sh@1535 -- # nvme_ctrlr=/dev/nvme0 00:03:07.717 10:43:23 -- common/autotest_common.sh@1536 -- # [[ -z /dev/nvme0 ]] 00:03:07.717 10:43:23 -- common/autotest_common.sh@1541 -- # nvme id-ctrl /dev/nvme0 00:03:07.717 10:43:23 -- common/autotest_common.sh@1541 -- # grep oacs 00:03:07.717 10:43:23 -- common/autotest_common.sh@1541 -- # cut -d: -f2 00:03:07.717 10:43:23 -- common/autotest_common.sh@1541 -- # oacs=' 0xf' 00:03:07.717 10:43:23 -- common/autotest_common.sh@1542 -- # oacs_ns_manage=8 00:03:07.717 10:43:23 -- common/autotest_common.sh@1544 -- # [[ 8 -ne 0 ]] 00:03:07.717 10:43:23 -- common/autotest_common.sh@1550 -- # nvme id-ctrl /dev/nvme0 00:03:07.717 10:43:23 -- common/autotest_common.sh@1550 -- # grep unvmcap 00:03:07.717 10:43:23 -- common/autotest_common.sh@1550 -- # cut -d: -f2 00:03:07.717 10:43:23 -- common/autotest_common.sh@1550 -- # unvmcap=' 0' 00:03:07.717 10:43:23 -- common/autotest_common.sh@1551 -- # [[ 0 -eq 0 ]] 00:03:07.717 10:43:23 -- common/autotest_common.sh@1553 -- # continue 00:03:07.717 10:43:23 -- spdk/autotest.sh@144 -- # timing_exit pre_cleanup 00:03:07.717 10:43:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:07.717 10:43:23 -- common/autotest_common.sh@10 -- # set +x 00:03:07.717 10:43:23 -- spdk/autotest.sh@147 -- # timing_enter afterboot 00:03:07.717 10:43:23 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:07.717 10:43:23 -- common/autotest_common.sh@10 -- # set +x 00:03:07.717 10:43:23 -- spdk/autotest.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:09.092 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:09.092 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:09.092 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:09.092 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:09.092 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:09.092 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:09.092 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:09.092 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:09.092 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:09.092 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:09.092 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:09.351 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:09.351 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:09.351 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:09.351 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:09.351 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:10.290 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:10.290 10:43:26 -- spdk/autotest.sh@149 -- # timing_exit afterboot 00:03:10.290 10:43:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:10.290 10:43:26 -- 
common/autotest_common.sh@10 -- # set +x 00:03:10.290 10:43:26 -- spdk/autotest.sh@153 -- # opal_revert_cleanup 00:03:10.290 10:43:26 -- common/autotest_common.sh@1587 -- # mapfile -t bdfs 00:03:10.290 10:43:26 -- common/autotest_common.sh@1587 -- # get_nvme_bdfs_by_id 0x0a54 00:03:10.290 10:43:26 -- common/autotest_common.sh@1573 -- # bdfs=() 00:03:10.290 10:43:26 -- common/autotest_common.sh@1573 -- # local bdfs 00:03:10.290 10:43:26 -- common/autotest_common.sh@1575 -- # get_nvme_bdfs 00:03:10.290 10:43:26 -- common/autotest_common.sh@1509 -- # bdfs=() 00:03:10.290 10:43:26 -- common/autotest_common.sh@1509 -- # local bdfs 00:03:10.290 10:43:26 -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:10.290 10:43:26 -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:10.290 10:43:26 -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:03:10.549 10:43:26 -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:03:10.549 10:43:26 -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:03:10.549 10:43:26 -- common/autotest_common.sh@1575 -- # for bdf in $(get_nvme_bdfs) 00:03:10.549 10:43:26 -- common/autotest_common.sh@1576 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:10.549 10:43:26 -- common/autotest_common.sh@1576 -- # device=0x0a54 00:03:10.549 10:43:26 -- common/autotest_common.sh@1577 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:10.549 10:43:26 -- common/autotest_common.sh@1578 -- # bdfs+=($bdf) 00:03:10.549 10:43:26 -- common/autotest_common.sh@1582 -- # printf '%s\n' 0000:88:00.0 00:03:10.549 10:43:26 -- common/autotest_common.sh@1588 -- # [[ -z 0000:88:00.0 ]] 00:03:10.549 10:43:26 -- common/autotest_common.sh@1593 -- # spdk_tgt_pid=2665568 00:03:10.549 10:43:26 -- common/autotest_common.sh@1592 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:10.549 10:43:26 -- common/autotest_common.sh@1594 -- # waitforlisten 2665568 00:03:10.549 10:43:26 -- common/autotest_common.sh@827 -- # '[' -z 2665568 ']' 00:03:10.549 10:43:26 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:10.549 10:43:26 -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:10.549 10:43:26 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:10.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:10.549 10:43:26 -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:10.549 10:43:26 -- common/autotest_common.sh@10 -- # set +x 00:03:10.549 [2024-05-15 10:43:26.616475] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
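
opal_revert_cleanup, whose trace surrounds this point, narrows the BDF list to one controller model by reading each device's PCI device ID out of sysfs and comparing it to 0x0a54, then starts spdk_tgt and reverts the drive's Opal state over JSON-RPC. A condensed sketch of that flow as direct rpc.py calls rather than the test's rpc_cmd wrapper; paths, the nvme0 name, and the BDF are taken from this run, and the revert is expected to fail here, as the entries below show:

  # Filter BDFs by PCI device ID, as get_nvme_bdfs_by_id 0x0a54 does:
  opal_bdfs=()
  for bdf in "${bdfs[@]}"; do
      [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && opal_bdfs+=("$bdf")
  done
  # Attach the controller and issue the revert via the RPC client:
  "$rootdir/scripts/rpc.py" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0
  "$rootdir/scripts/rpc.py" bdev_nvme_opal_revert -b nvme0 -p test || true  # error 18 in this run
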
00:03:10.549 [2024-05-15 10:43:26.616584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2665568 ] 00:03:10.549 EAL: No free 2048 kB hugepages reported on node 1 00:03:10.549 [2024-05-15 10:43:26.686113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:10.809 [2024-05-15 10:43:26.796570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:11.105 10:43:27 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:11.105 10:43:27 -- common/autotest_common.sh@860 -- # return 0 00:03:11.105 10:43:27 -- common/autotest_common.sh@1596 -- # bdf_id=0 00:03:11.105 10:43:27 -- common/autotest_common.sh@1597 -- # for bdf in "${bdfs[@]}" 00:03:11.105 10:43:27 -- common/autotest_common.sh@1598 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:14.388 nvme0n1 00:03:14.388 10:43:30 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:14.388 [2024-05-15 10:43:30.417671] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:14.388 [2024-05-15 10:43:30.417715] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:14.388 request: 00:03:14.388 { 00:03:14.388 "nvme_ctrlr_name": "nvme0", 00:03:14.388 "password": "test", 00:03:14.388 "method": "bdev_nvme_opal_revert", 00:03:14.388 "req_id": 1 00:03:14.388 } 00:03:14.388 Got JSON-RPC error response 00:03:14.388 response: 00:03:14.388 { 00:03:14.388 "code": -32603, 00:03:14.388 "message": "Internal error" 00:03:14.388 } 00:03:14.388 10:43:30 -- common/autotest_common.sh@1600 -- # true 00:03:14.388 10:43:30 -- common/autotest_common.sh@1601 -- # (( ++bdf_id )) 00:03:14.388 10:43:30 -- common/autotest_common.sh@1604 -- # killprocess 2665568 00:03:14.388 10:43:30 -- common/autotest_common.sh@946 -- # '[' -z 2665568 ']' 00:03:14.388 10:43:30 -- common/autotest_common.sh@950 -- # kill -0 2665568 00:03:14.388 10:43:30 -- common/autotest_common.sh@951 -- # uname 00:03:14.388 10:43:30 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:14.388 10:43:30 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2665568 00:03:14.388 10:43:30 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:14.388 10:43:30 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:14.388 10:43:30 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2665568' 00:03:14.388 killing process with pid 2665568 00:03:14.388 10:43:30 -- common/autotest_common.sh@965 -- # kill 2665568 00:03:14.388 10:43:30 -- common/autotest_common.sh@970 -- # wait 2665568 00:03:16.349 10:43:32 -- spdk/autotest.sh@159 -- # '[' 0 -eq 1 ']' 00:03:16.349 10:43:32 -- spdk/autotest.sh@163 -- # '[' 1 -eq 1 ']' 00:03:16.349 10:43:32 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:16.349 10:43:32 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:16.349 10:43:32 -- spdk/autotest.sh@171 -- # timing_enter lib 00:03:16.349 10:43:32 -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:16.349 10:43:32 -- common/autotest_common.sh@10 -- # set +x 00:03:16.349 10:43:32 -- spdk/autotest.sh@173 -- # [[ 0 -eq 1 ]] 00:03:16.349 10:43:32 -- spdk/autotest.sh@177 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:16.349 10:43:32 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:16.349 10:43:32 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:16.349 10:43:32 -- common/autotest_common.sh@10 -- # set +x 00:03:16.349 ************************************ 00:03:16.349 START TEST env 00:03:16.349 ************************************ 00:03:16.349 10:43:32 env -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:16.349 * Looking for test storage... 00:03:16.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:16.349 10:43:32 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:16.349 10:43:32 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:16.349 10:43:32 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:16.349 10:43:32 env -- common/autotest_common.sh@10 -- # set +x 00:03:16.349 ************************************ 00:03:16.349 START TEST env_memory 00:03:16.349 ************************************ 00:03:16.349 10:43:32 env.env_memory -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:16.349 00:03:16.349 00:03:16.349 CUnit - A unit testing framework for C - Version 2.1-3 00:03:16.349 http://cunit.sourceforge.net/ 00:03:16.349 00:03:16.349 00:03:16.349 Suite: memory 00:03:16.349 Test: alloc and free memory map ...[2024-05-15 10:43:32.421169] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:16.349 passed 00:03:16.349 Test: mem map translation ...[2024-05-15 10:43:32.441801] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:16.349 [2024-05-15 10:43:32.441824] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:16.349 [2024-05-15 10:43:32.441866] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:16.349 [2024-05-15 10:43:32.441879] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:16.349 passed 00:03:16.349 Test: mem map registration ...[2024-05-15 10:43:32.483502] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:16.349 [2024-05-15 10:43:32.483523] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:16.349 passed 00:03:16.349 Test: mem map adjacent registrations ...passed 00:03:16.349 00:03:16.349 Run Summary: Type Total Ran Passed Failed Inactive 00:03:16.349 suites 1 1 n/a 0 0 00:03:16.349 tests 4 4 4 0 0 00:03:16.349 asserts 152 152 152 0 n/a 00:03:16.349 00:03:16.349 Elapsed time = 0.142 seconds 00:03:16.349 00:03:16.349 real 0m0.149s 00:03:16.349 user 0m0.144s 00:03:16.349 sys 0m0.004s 00:03:16.349 10:43:32 
env.env_memory -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:16.349 10:43:32 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:16.349 ************************************ 00:03:16.349 END TEST env_memory 00:03:16.349 ************************************ 00:03:16.349 10:43:32 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:16.349 10:43:32 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:16.349 10:43:32 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:16.349 10:43:32 env -- common/autotest_common.sh@10 -- # set +x 00:03:16.608 ************************************ 00:03:16.608 START TEST env_vtophys 00:03:16.608 ************************************ 00:03:16.608 10:43:32 env.env_vtophys -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:16.608 EAL: lib.eal log level changed from notice to debug 00:03:16.608 EAL: Detected lcore 0 as core 0 on socket 0 00:03:16.608 EAL: Detected lcore 1 as core 1 on socket 0 00:03:16.608 EAL: Detected lcore 2 as core 2 on socket 0 00:03:16.608 EAL: Detected lcore 3 as core 3 on socket 0 00:03:16.608 EAL: Detected lcore 4 as core 4 on socket 0 00:03:16.608 EAL: Detected lcore 5 as core 5 on socket 0 00:03:16.608 EAL: Detected lcore 6 as core 8 on socket 0 00:03:16.608 EAL: Detected lcore 7 as core 9 on socket 0 00:03:16.608 EAL: Detected lcore 8 as core 10 on socket 0 00:03:16.608 EAL: Detected lcore 9 as core 11 on socket 0 00:03:16.608 EAL: Detected lcore 10 as core 12 on socket 0 00:03:16.608 EAL: Detected lcore 11 as core 13 on socket 0 00:03:16.608 EAL: Detected lcore 12 as core 0 on socket 1 00:03:16.608 EAL: Detected lcore 13 as core 1 on socket 1 00:03:16.608 EAL: Detected lcore 14 as core 2 on socket 1 00:03:16.608 EAL: Detected lcore 15 as core 3 on socket 1 00:03:16.608 EAL: Detected lcore 16 as core 4 on socket 1 00:03:16.608 EAL: Detected lcore 17 as core 5 on socket 1 00:03:16.608 EAL: Detected lcore 18 as core 8 on socket 1 00:03:16.608 EAL: Detected lcore 19 as core 9 on socket 1 00:03:16.608 EAL: Detected lcore 20 as core 10 on socket 1 00:03:16.608 EAL: Detected lcore 21 as core 11 on socket 1 00:03:16.608 EAL: Detected lcore 22 as core 12 on socket 1 00:03:16.608 EAL: Detected lcore 23 as core 13 on socket 1 00:03:16.608 EAL: Detected lcore 24 as core 0 on socket 0 00:03:16.608 EAL: Detected lcore 25 as core 1 on socket 0 00:03:16.608 EAL: Detected lcore 26 as core 2 on socket 0 00:03:16.608 EAL: Detected lcore 27 as core 3 on socket 0 00:03:16.608 EAL: Detected lcore 28 as core 4 on socket 0 00:03:16.608 EAL: Detected lcore 29 as core 5 on socket 0 00:03:16.608 EAL: Detected lcore 30 as core 8 on socket 0 00:03:16.608 EAL: Detected lcore 31 as core 9 on socket 0 00:03:16.608 EAL: Detected lcore 32 as core 10 on socket 0 00:03:16.608 EAL: Detected lcore 33 as core 11 on socket 0 00:03:16.608 EAL: Detected lcore 34 as core 12 on socket 0 00:03:16.608 EAL: Detected lcore 35 as core 13 on socket 0 00:03:16.608 EAL: Detected lcore 36 as core 0 on socket 1 00:03:16.608 EAL: Detected lcore 37 as core 1 on socket 1 00:03:16.608 EAL: Detected lcore 38 as core 2 on socket 1 00:03:16.608 EAL: Detected lcore 39 as core 3 on socket 1 00:03:16.608 EAL: Detected lcore 40 as core 4 on socket 1 00:03:16.608 EAL: Detected lcore 41 as core 5 on socket 1 00:03:16.608 EAL: Detected lcore 42 as core 8 on socket 1 00:03:16.608 EAL: Detected lcore 43 as core 9 
on socket 1 00:03:16.608 EAL: Detected lcore 44 as core 10 on socket 1 00:03:16.608 EAL: Detected lcore 45 as core 11 on socket 1 00:03:16.608 EAL: Detected lcore 46 as core 12 on socket 1 00:03:16.608 EAL: Detected lcore 47 as core 13 on socket 1 00:03:16.608 EAL: Maximum logical cores by configuration: 128 00:03:16.608 EAL: Detected CPU lcores: 48 00:03:16.608 EAL: Detected NUMA nodes: 2 00:03:16.608 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:16.608 EAL: Detected shared linkage of DPDK 00:03:16.608 EAL: No shared files mode enabled, IPC will be disabled 00:03:16.608 EAL: Bus pci wants IOVA as 'DC' 00:03:16.608 EAL: Buses did not request a specific IOVA mode. 00:03:16.608 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:16.608 EAL: Selected IOVA mode 'VA' 00:03:16.608 EAL: No free 2048 kB hugepages reported on node 1 00:03:16.608 EAL: Probing VFIO support... 00:03:16.608 EAL: IOMMU type 1 (Type 1) is supported 00:03:16.608 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:16.608 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:16.608 EAL: VFIO support initialized 00:03:16.608 EAL: Ask a virtual area of 0x2e000 bytes 00:03:16.608 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:16.608 EAL: Setting up physically contiguous memory... 00:03:16.608 EAL: Setting maximum number of open files to 524288 00:03:16.608 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:16.608 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:16.608 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:16.608 EAL: Ask a virtual area of 0x61000 bytes 00:03:16.608 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:16.608 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:16.608 EAL: Ask a virtual area of 0x400000000 bytes 00:03:16.608 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:16.608 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:16.608 EAL: Ask a virtual area of 0x61000 bytes 00:03:16.608 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:16.608 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:16.608 EAL: Ask a virtual area of 0x400000000 bytes 00:03:16.608 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:16.608 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:16.608 EAL: Ask a virtual area of 0x61000 bytes 00:03:16.608 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:16.608 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:16.608 EAL: Ask a virtual area of 0x400000000 bytes 00:03:16.608 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:16.608 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:16.608 EAL: Ask a virtual area of 0x61000 bytes 00:03:16.608 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:16.608 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:16.608 EAL: Ask a virtual area of 0x400000000 bytes 00:03:16.608 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:16.608 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:16.608 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:16.608 EAL: Ask a virtual area of 0x61000 bytes 00:03:16.608 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:16.608 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:16.608 EAL: Ask a virtual 
area of 0x400000000 bytes 00:03:16.608 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:16.608 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:16.608 EAL: Ask a virtual area of 0x61000 bytes 00:03:16.608 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:16.608 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:16.608 EAL: Ask a virtual area of 0x400000000 bytes 00:03:16.608 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:16.608 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:16.608 EAL: Ask a virtual area of 0x61000 bytes 00:03:16.608 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:16.608 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:16.608 EAL: Ask a virtual area of 0x400000000 bytes 00:03:16.608 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:16.608 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:16.608 EAL: Ask a virtual area of 0x61000 bytes 00:03:16.608 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:16.608 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:16.608 EAL: Ask a virtual area of 0x400000000 bytes 00:03:16.608 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:16.608 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:16.608 EAL: Hugepages will be freed exactly as allocated. 00:03:16.609 EAL: No shared files mode enabled, IPC is disabled 00:03:16.609 EAL: No shared files mode enabled, IPC is disabled 00:03:16.609 EAL: TSC frequency is ~2700000 KHz 00:03:16.609 EAL: Main lcore 0 is ready (tid=7fb8f617ba00;cpuset=[0]) 00:03:16.609 EAL: Trying to obtain current memory policy. 00:03:16.609 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:16.609 EAL: Restoring previous memory policy: 0 00:03:16.609 EAL: request: mp_malloc_sync 00:03:16.609 EAL: No shared files mode enabled, IPC is disabled 00:03:16.609 EAL: Heap on socket 0 was expanded by 2MB 00:03:16.609 EAL: No shared files mode enabled, IPC is disabled 00:03:16.609 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:16.609 EAL: Mem event callback 'spdk:(nil)' registered 00:03:16.609 00:03:16.609 00:03:16.609 CUnit - A unit testing framework for C - Version 2.1-3 00:03:16.609 http://cunit.sourceforge.net/ 00:03:16.609 00:03:16.609 00:03:16.609 Suite: components_suite 00:03:16.609 Test: vtophys_malloc_test ...passed 00:03:16.609 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:16.609 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:16.609 EAL: Restoring previous memory policy: 4 00:03:16.609 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.609 EAL: request: mp_malloc_sync 00:03:16.609 EAL: No shared files mode enabled, IPC is disabled 00:03:16.609 EAL: Heap on socket 0 was expanded by 4MB 00:03:16.609 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.609 EAL: request: mp_malloc_sync 00:03:16.609 EAL: No shared files mode enabled, IPC is disabled 00:03:16.609 EAL: Heap on socket 0 was shrunk by 4MB 00:03:16.609 EAL: Trying to obtain current memory policy. 
00:03:16.609 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:16.609 EAL: Restoring previous memory policy: 4 00:03:16.609 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.609 EAL: request: mp_malloc_sync 00:03:16.609 EAL: No shared files mode enabled, IPC is disabled 00:03:16.609 EAL: Heap on socket 0 was expanded by 6MB 00:03:16.609 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.609 EAL: request: mp_malloc_sync 00:03:16.609 EAL: No shared files mode enabled, IPC is disabled 00:03:16.609 EAL: Heap on socket 0 was shrunk by 6MB 00:03:16.609 EAL: Trying to obtain current memory policy. 00:03:16.609 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:16.609 EAL: Restoring previous memory policy: 4 00:03:16.609 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.609 EAL: request: mp_malloc_sync 00:03:16.609 EAL: No shared files mode enabled, IPC is disabled 00:03:16.609 EAL: Heap on socket 0 was expanded by 10MB 00:03:16.609 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.609 EAL: request: mp_malloc_sync 00:03:16.609 EAL: No shared files mode enabled, IPC is disabled 00:03:16.609 EAL: Heap on socket 0 was shrunk by 10MB 00:03:16.609 EAL: Trying to obtain current memory policy. 00:03:16.609 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:16.609 EAL: Restoring previous memory policy: 4 00:03:16.609 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.609 EAL: request: mp_malloc_sync 00:03:16.609 EAL: No shared files mode enabled, IPC is disabled 00:03:16.609 EAL: Heap on socket 0 was expanded by 18MB 00:03:16.609 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.609 EAL: request: mp_malloc_sync 00:03:16.609 EAL: No shared files mode enabled, IPC is disabled 00:03:16.609 EAL: Heap on socket 0 was shrunk by 18MB 00:03:16.609 EAL: Trying to obtain current memory policy. 00:03:16.609 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:16.609 EAL: Restoring previous memory policy: 4 00:03:16.609 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.609 EAL: request: mp_malloc_sync 00:03:16.609 EAL: No shared files mode enabled, IPC is disabled 00:03:16.609 EAL: Heap on socket 0 was expanded by 34MB 00:03:16.609 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.609 EAL: request: mp_malloc_sync 00:03:16.609 EAL: No shared files mode enabled, IPC is disabled 00:03:16.609 EAL: Heap on socket 0 was shrunk by 34MB 00:03:16.609 EAL: Trying to obtain current memory policy. 00:03:16.609 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:16.609 EAL: Restoring previous memory policy: 4 00:03:16.609 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.609 EAL: request: mp_malloc_sync 00:03:16.609 EAL: No shared files mode enabled, IPC is disabled 00:03:16.609 EAL: Heap on socket 0 was expanded by 66MB 00:03:16.609 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.609 EAL: request: mp_malloc_sync 00:03:16.609 EAL: No shared files mode enabled, IPC is disabled 00:03:16.609 EAL: Heap on socket 0 was shrunk by 66MB 00:03:16.609 EAL: Trying to obtain current memory policy. 
00:03:16.609 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:16.609 EAL: Restoring previous memory policy: 4 00:03:16.609 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.609 EAL: request: mp_malloc_sync 00:03:16.609 EAL: No shared files mode enabled, IPC is disabled 00:03:16.609 EAL: Heap on socket 0 was expanded by 130MB 00:03:16.609 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.609 EAL: request: mp_malloc_sync 00:03:16.609 EAL: No shared files mode enabled, IPC is disabled 00:03:16.609 EAL: Heap on socket 0 was shrunk by 130MB 00:03:16.609 EAL: Trying to obtain current memory policy. 00:03:16.609 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:16.867 EAL: Restoring previous memory policy: 4 00:03:16.867 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.867 EAL: request: mp_malloc_sync 00:03:16.867 EAL: No shared files mode enabled, IPC is disabled 00:03:16.867 EAL: Heap on socket 0 was expanded by 258MB 00:03:16.867 EAL: Calling mem event callback 'spdk:(nil)' 00:03:16.867 EAL: request: mp_malloc_sync 00:03:16.867 EAL: No shared files mode enabled, IPC is disabled 00:03:16.867 EAL: Heap on socket 0 was shrunk by 258MB 00:03:16.867 EAL: Trying to obtain current memory policy. 00:03:16.867 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:17.124 EAL: Restoring previous memory policy: 4 00:03:17.124 EAL: Calling mem event callback 'spdk:(nil)' 00:03:17.124 EAL: request: mp_malloc_sync 00:03:17.124 EAL: No shared files mode enabled, IPC is disabled 00:03:17.124 EAL: Heap on socket 0 was expanded by 514MB 00:03:17.124 EAL: Calling mem event callback 'spdk:(nil)' 00:03:17.381 EAL: request: mp_malloc_sync 00:03:17.381 EAL: No shared files mode enabled, IPC is disabled 00:03:17.381 EAL: Heap on socket 0 was shrunk by 514MB 00:03:17.381 EAL: Trying to obtain current memory policy. 
00:03:17.381 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:17.638 EAL: Restoring previous memory policy: 4 00:03:17.638 EAL: Calling mem event callback 'spdk:(nil)' 00:03:17.638 EAL: request: mp_malloc_sync 00:03:17.638 EAL: No shared files mode enabled, IPC is disabled 00:03:17.638 EAL: Heap on socket 0 was expanded by 1026MB 00:03:17.638 EAL: Calling mem event callback 'spdk:(nil)' 00:03:17.897 EAL: request: mp_malloc_sync 00:03:17.897 EAL: No shared files mode enabled, IPC is disabled 00:03:17.897 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:17.897 passed 00:03:17.897 00:03:17.897 Run Summary: Type Total Ran Passed Failed Inactive 00:03:17.897 suites 1 1 n/a 0 0 00:03:17.897 tests 2 2 2 0 0 00:03:17.897 asserts 497 497 497 0 n/a 00:03:17.897 00:03:17.897 Elapsed time = 1.382 seconds 00:03:17.897 EAL: Calling mem event callback 'spdk:(nil)' 00:03:17.897 EAL: request: mp_malloc_sync 00:03:17.897 EAL: No shared files mode enabled, IPC is disabled 00:03:17.897 EAL: Heap on socket 0 was shrunk by 2MB 00:03:17.897 EAL: No shared files mode enabled, IPC is disabled 00:03:17.897 EAL: No shared files mode enabled, IPC is disabled 00:03:17.897 EAL: No shared files mode enabled, IPC is disabled 00:03:17.897 00:03:17.897 real 0m1.509s 00:03:17.897 user 0m0.864s 00:03:17.897 sys 0m0.615s 00:03:17.897 10:43:34 env.env_vtophys -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:17.897 10:43:34 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:17.897 ************************************ 00:03:17.897 END TEST env_vtophys 00:03:17.897 ************************************ 00:03:17.897 10:43:34 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:17.897 10:43:34 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:17.897 10:43:34 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:17.897 10:43:34 env -- common/autotest_common.sh@10 -- # set +x 00:03:18.155 ************************************ 00:03:18.155 START TEST env_pci 00:03:18.155 ************************************ 00:03:18.155 10:43:34 env.env_pci -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:18.155 00:03:18.155 00:03:18.155 CUnit - A unit testing framework for C - Version 2.1-3 00:03:18.155 http://cunit.sourceforge.net/ 00:03:18.155 00:03:18.155 00:03:18.155 Suite: pci 00:03:18.155 Test: pci_hook ...[2024-05-15 10:43:34.161550] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2666461 has claimed it 00:03:18.155 EAL: Cannot find device (10000:00:01.0) 00:03:18.155 EAL: Failed to attach device on primary process 00:03:18.155 passed 00:03:18.155 00:03:18.155 Run Summary: Type Total Ran Passed Failed Inactive 00:03:18.155 suites 1 1 n/a 0 0 00:03:18.155 tests 1 1 1 0 0 00:03:18.155 asserts 25 25 25 0 n/a 00:03:18.155 00:03:18.155 Elapsed time = 0.027 seconds 00:03:18.155 00:03:18.155 real 0m0.040s 00:03:18.155 user 0m0.011s 00:03:18.155 sys 0m0.029s 00:03:18.155 10:43:34 env.env_pci -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:18.155 10:43:34 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:18.155 ************************************ 00:03:18.155 END TEST env_pci 00:03:18.155 ************************************ 00:03:18.155 10:43:34 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:18.155 
10:43:34 env -- env/env.sh@15 -- # uname 00:03:18.155 10:43:34 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:18.155 10:43:34 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:18.155 10:43:34 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:18.155 10:43:34 env -- common/autotest_common.sh@1097 -- # '[' 5 -le 1 ']' 00:03:18.155 10:43:34 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:18.155 10:43:34 env -- common/autotest_common.sh@10 -- # set +x 00:03:18.155 ************************************ 00:03:18.155 START TEST env_dpdk_post_init 00:03:18.155 ************************************ 00:03:18.155 10:43:34 env.env_dpdk_post_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:18.155 EAL: Detected CPU lcores: 48 00:03:18.155 EAL: Detected NUMA nodes: 2 00:03:18.155 EAL: Detected shared linkage of DPDK 00:03:18.155 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:18.155 EAL: Selected IOVA mode 'VA' 00:03:18.155 EAL: No free 2048 kB hugepages reported on node 1 00:03:18.155 EAL: VFIO support initialized 00:03:18.155 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:18.155 EAL: Using IOMMU type 1 (Type 1) 00:03:18.155 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:18.413 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:18.413 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:18.413 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:18.413 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:18.413 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:18.413 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:18.413 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:18.413 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:18.413 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:18.413 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:18.413 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:18.413 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:18.414 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:18.414 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:18.414 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:19.350 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:03:22.629 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:03:22.629 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:03:22.629 Starting DPDK initialization... 00:03:22.629 Starting SPDK post initialization... 00:03:22.629 SPDK NVMe probe 00:03:22.629 Attaching to 0000:88:00.0 00:03:22.629 Attached to 0000:88:00.0 00:03:22.629 Cleaning up... 
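
env_dpdk_post_init probes every device that setup.sh handed to vfio-pci: the sixteen I/OAT channels bind to spdk_ioat and the NVMe controller to spdk_nvme, as the probe lines above show. Which kernel driver currently owns a BDF can be checked directly in sysfs, mirroring the readlink pattern used elsewhere in this log; a small sketch with the BDF taken from this run:

  # Report the kernel driver bound to the NVMe controller's BDF.
  bdf=0000:88:00.0
  driver=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
  echo "$bdf is bound to: $driver"   # vfio-pci while the SPDK tests run
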
00:03:22.629 00:03:22.629 real 0m4.408s 00:03:22.629 user 0m3.258s 00:03:22.629 sys 0m0.207s 00:03:22.629 10:43:38 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:22.629 10:43:38 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:22.629 ************************************ 00:03:22.629 END TEST env_dpdk_post_init 00:03:22.629 ************************************ 00:03:22.629 10:43:38 env -- env/env.sh@26 -- # uname 00:03:22.629 10:43:38 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:22.629 10:43:38 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:22.629 10:43:38 env -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:22.629 10:43:38 env -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:22.629 10:43:38 env -- common/autotest_common.sh@10 -- # set +x 00:03:22.629 ************************************ 00:03:22.629 START TEST env_mem_callbacks 00:03:22.629 ************************************ 00:03:22.629 10:43:38 env.env_mem_callbacks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:22.629 EAL: Detected CPU lcores: 48 00:03:22.629 EAL: Detected NUMA nodes: 2 00:03:22.629 EAL: Detected shared linkage of DPDK 00:03:22.629 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:22.629 EAL: Selected IOVA mode 'VA' 00:03:22.629 EAL: No free 2048 kB hugepages reported on node 1 00:03:22.629 EAL: VFIO support initialized 00:03:22.629 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:22.629 00:03:22.629 00:03:22.629 CUnit - A unit testing framework for C - Version 2.1-3 00:03:22.629 http://cunit.sourceforge.net/ 00:03:22.629 00:03:22.629 00:03:22.629 Suite: memory 00:03:22.629 Test: test ... 
00:03:22.629 register 0x200000200000 2097152 00:03:22.629 malloc 3145728 00:03:22.629 register 0x200000400000 4194304 00:03:22.629 buf 0x200000500000 len 3145728 PASSED 00:03:22.629 malloc 64 00:03:22.629 buf 0x2000004fff40 len 64 PASSED 00:03:22.629 malloc 4194304 00:03:22.629 register 0x200000800000 6291456 00:03:22.629 buf 0x200000a00000 len 4194304 PASSED 00:03:22.629 free 0x200000500000 3145728 00:03:22.629 free 0x2000004fff40 64 00:03:22.629 unregister 0x200000400000 4194304 PASSED 00:03:22.629 free 0x200000a00000 4194304 00:03:22.629 unregister 0x200000800000 6291456 PASSED 00:03:22.629 malloc 8388608 00:03:22.629 register 0x200000400000 10485760 00:03:22.629 buf 0x200000600000 len 8388608 PASSED 00:03:22.629 free 0x200000600000 8388608 00:03:22.629 unregister 0x200000400000 10485760 PASSED 00:03:22.629 passed 00:03:22.629 00:03:22.629 Run Summary: Type Total Ran Passed Failed Inactive 00:03:22.629 suites 1 1 n/a 0 0 00:03:22.629 tests 1 1 1 0 0 00:03:22.629 asserts 15 15 15 0 n/a 00:03:22.629 00:03:22.629 Elapsed time = 0.005 seconds 00:03:22.629 00:03:22.629 real 0m0.055s 00:03:22.629 user 0m0.021s 00:03:22.629 sys 0m0.034s 00:03:22.629 10:43:38 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:22.629 10:43:38 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:22.629 ************************************ 00:03:22.629 END TEST env_mem_callbacks 00:03:22.629 ************************************ 00:03:22.629 00:03:22.629 real 0m6.478s 00:03:22.629 user 0m4.427s 00:03:22.629 sys 0m1.084s 00:03:22.629 10:43:38 env -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:22.629 10:43:38 env -- common/autotest_common.sh@10 -- # set +x 00:03:22.629 ************************************ 00:03:22.629 END TEST env 00:03:22.629 ************************************ 00:03:22.630 10:43:38 -- spdk/autotest.sh@178 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:22.630 10:43:38 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:22.630 10:43:38 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:22.630 10:43:38 -- common/autotest_common.sh@10 -- # set +x 00:03:22.630 ************************************ 00:03:22.630 START TEST rpc 00:03:22.630 ************************************ 00:03:22.630 10:43:38 rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:22.887 * Looking for test storage... 00:03:22.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:22.887 10:43:38 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2667176 00:03:22.887 10:43:38 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:22.888 10:43:38 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:22.888 10:43:38 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2667176 00:03:22.888 10:43:38 rpc -- common/autotest_common.sh@827 -- # '[' -z 2667176 ']' 00:03:22.888 10:43:38 rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:22.888 10:43:38 rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:22.888 10:43:38 rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:22.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
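
rpc.sh launches spdk_tgt -e bdev and then blocks in waitforlisten until the target's JSON-RPC server answers on /var/tmp/spdk.sock, retrying up to max_retries=100 times as the xtrace above shows. A hedged sketch of such a readiness loop; the rpc_get_methods probe is an assumption about a reasonable no-op call, not a transcript of the helper itself:

  # Poll the SPDK RPC socket until the target is ready, or give up.
  rpc_addr=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do
      # -s selects the socket, -t sets a short per-call timeout (assumed probe)
      "$rootdir/scripts/rpc.py" -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && break
      sleep 0.5
  done
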
00:03:22.888 10:43:38 rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:22.888 10:43:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:22.888 [2024-05-15 10:43:38.940823] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:03:22.888 [2024-05-15 10:43:38.940916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667176 ] 00:03:22.888 EAL: No free 2048 kB hugepages reported on node 1 00:03:22.888 [2024-05-15 10:43:39.013413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:23.146 [2024-05-15 10:43:39.130609] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:23.146 [2024-05-15 10:43:39.130664] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2667176' to capture a snapshot of events at runtime. 00:03:23.146 [2024-05-15 10:43:39.130690] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:23.146 [2024-05-15 10:43:39.130709] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:23.146 [2024-05-15 10:43:39.130727] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2667176 for offline analysis/debug. 00:03:23.146 [2024-05-15 10:43:39.130776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:23.405 10:43:39 rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:23.405 10:43:39 rpc -- common/autotest_common.sh@860 -- # return 0 00:03:23.405 10:43:39 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:23.405 10:43:39 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:23.405 10:43:39 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:23.405 10:43:39 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:23.405 10:43:39 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:23.405 10:43:39 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:23.405 10:43:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:23.405 ************************************ 00:03:23.405 START TEST rpc_integrity 00:03:23.405 ************************************ 00:03:23.405 10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:03:23.405 10:43:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:23.405 10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:23.405 10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:23.405 10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:23.405 10:43:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:23.405 10:43:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:23.405 10:43:39 rpc.rpc_integrity -- 
rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:23.405 10:43:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:23.405 10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:23.405 10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:23.405 10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:23.405 10:43:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:23.405 10:43:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:23.405 10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:23.405 10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:23.405 10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:23.405 10:43:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:23.405 { 00:03:23.405 "name": "Malloc0", 00:03:23.405 "aliases": [ 00:03:23.405 "5d661517-2fb4-49e8-84ac-937498a2a6e1" 00:03:23.405 ], 00:03:23.405 "product_name": "Malloc disk", 00:03:23.405 "block_size": 512, 00:03:23.405 "num_blocks": 16384, 00:03:23.405 "uuid": "5d661517-2fb4-49e8-84ac-937498a2a6e1", 00:03:23.405 "assigned_rate_limits": { 00:03:23.405 "rw_ios_per_sec": 0, 00:03:23.405 "rw_mbytes_per_sec": 0, 00:03:23.405 "r_mbytes_per_sec": 0, 00:03:23.405 "w_mbytes_per_sec": 0 00:03:23.405 }, 00:03:23.405 "claimed": false, 00:03:23.405 "zoned": false, 00:03:23.405 "supported_io_types": { 00:03:23.405 "read": true, 00:03:23.405 "write": true, 00:03:23.405 "unmap": true, 00:03:23.405 "write_zeroes": true, 00:03:23.405 "flush": true, 00:03:23.405 "reset": true, 00:03:23.405 "compare": false, 00:03:23.405 "compare_and_write": false, 00:03:23.405 "abort": true, 00:03:23.405 "nvme_admin": false, 00:03:23.405 "nvme_io": false 00:03:23.405 }, 00:03:23.405 "memory_domains": [ 00:03:23.405 { 00:03:23.405 "dma_device_id": "system", 00:03:23.405 "dma_device_type": 1 00:03:23.405 }, 00:03:23.405 { 00:03:23.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:23.405 "dma_device_type": 2 00:03:23.405 } 00:03:23.405 ], 00:03:23.405 "driver_specific": {} 00:03:23.405 } 00:03:23.405 ]' 00:03:23.405 10:43:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:23.405 10:43:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:23.405 10:43:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:23.405 10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:23.405 10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:23.405 [2024-05-15 10:43:39.543757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:23.405 [2024-05-15 10:43:39.543802] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:23.405 [2024-05-15 10:43:39.543835] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x248eb50 00:03:23.405 [2024-05-15 10:43:39.543862] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:23.405 [2024-05-15 10:43:39.545386] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:23.405 [2024-05-15 10:43:39.545416] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:23.405 Passthru0 00:03:23.405 10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:23.405 10:43:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:03:23.405 10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:23.405 10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:23.405 10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:23.405 10:43:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:23.405 { 00:03:23.405 "name": "Malloc0", 00:03:23.405 "aliases": [ 00:03:23.405 "5d661517-2fb4-49e8-84ac-937498a2a6e1" 00:03:23.405 ], 00:03:23.405 "product_name": "Malloc disk", 00:03:23.405 "block_size": 512, 00:03:23.405 "num_blocks": 16384, 00:03:23.405 "uuid": "5d661517-2fb4-49e8-84ac-937498a2a6e1", 00:03:23.405 "assigned_rate_limits": { 00:03:23.405 "rw_ios_per_sec": 0, 00:03:23.405 "rw_mbytes_per_sec": 0, 00:03:23.405 "r_mbytes_per_sec": 0, 00:03:23.405 "w_mbytes_per_sec": 0 00:03:23.405 }, 00:03:23.405 "claimed": true, 00:03:23.405 "claim_type": "exclusive_write", 00:03:23.405 "zoned": false, 00:03:23.405 "supported_io_types": { 00:03:23.405 "read": true, 00:03:23.405 "write": true, 00:03:23.405 "unmap": true, 00:03:23.405 "write_zeroes": true, 00:03:23.405 "flush": true, 00:03:23.405 "reset": true, 00:03:23.405 "compare": false, 00:03:23.405 "compare_and_write": false, 00:03:23.405 "abort": true, 00:03:23.405 "nvme_admin": false, 00:03:23.405 "nvme_io": false 00:03:23.405 }, 00:03:23.405 "memory_domains": [ 00:03:23.405 { 00:03:23.405 "dma_device_id": "system", 00:03:23.405 "dma_device_type": 1 00:03:23.405 }, 00:03:23.405 { 00:03:23.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:23.405 "dma_device_type": 2 00:03:23.405 } 00:03:23.405 ], 00:03:23.405 "driver_specific": {} 00:03:23.405 }, 00:03:23.405 { 00:03:23.405 "name": "Passthru0", 00:03:23.405 "aliases": [ 00:03:23.405 "3f6adb30-03d8-514b-8e49-4300f37ace59" 00:03:23.405 ], 00:03:23.405 "product_name": "passthru", 00:03:23.405 "block_size": 512, 00:03:23.406 "num_blocks": 16384, 00:03:23.406 "uuid": "3f6adb30-03d8-514b-8e49-4300f37ace59", 00:03:23.406 "assigned_rate_limits": { 00:03:23.406 "rw_ios_per_sec": 0, 00:03:23.406 "rw_mbytes_per_sec": 0, 00:03:23.406 "r_mbytes_per_sec": 0, 00:03:23.406 "w_mbytes_per_sec": 0 00:03:23.406 }, 00:03:23.406 "claimed": false, 00:03:23.406 "zoned": false, 00:03:23.406 "supported_io_types": { 00:03:23.406 "read": true, 00:03:23.406 "write": true, 00:03:23.406 "unmap": true, 00:03:23.406 "write_zeroes": true, 00:03:23.406 "flush": true, 00:03:23.406 "reset": true, 00:03:23.406 "compare": false, 00:03:23.406 "compare_and_write": false, 00:03:23.406 "abort": true, 00:03:23.406 "nvme_admin": false, 00:03:23.406 "nvme_io": false 00:03:23.406 }, 00:03:23.406 "memory_domains": [ 00:03:23.406 { 00:03:23.406 "dma_device_id": "system", 00:03:23.406 "dma_device_type": 1 00:03:23.406 }, 00:03:23.406 { 00:03:23.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:23.406 "dma_device_type": 2 00:03:23.406 } 00:03:23.406 ], 00:03:23.406 "driver_specific": { 00:03:23.406 "passthru": { 00:03:23.406 "name": "Passthru0", 00:03:23.406 "base_bdev_name": "Malloc0" 00:03:23.406 } 00:03:23.406 } 00:03:23.406 } 00:03:23.406 ]' 00:03:23.406 10:43:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:23.406 10:43:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:23.406 10:43:39 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:23.406 10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:23.406 10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:23.406 
10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:23.406 10:43:39 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:23.406 10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:23.406 10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:23.406 10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:23.406 10:43:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:23.406 10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:23.406 10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:23.406 10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:23.406 10:43:39 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:23.406 10:43:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:23.664 10:43:39 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:23.664 00:03:23.664 real 0m0.231s 00:03:23.664 user 0m0.152s 00:03:23.664 sys 0m0.021s 00:03:23.664 10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:23.664 10:43:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:23.664 ************************************ 00:03:23.664 END TEST rpc_integrity 00:03:23.664 ************************************ 00:03:23.664 10:43:39 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:23.664 10:43:39 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:23.664 10:43:39 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:23.664 10:43:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:23.664 ************************************ 00:03:23.664 START TEST rpc_plugins 00:03:23.664 ************************************ 00:03:23.664 10:43:39 rpc.rpc_plugins -- common/autotest_common.sh@1121 -- # rpc_plugins 00:03:23.664 10:43:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:23.664 10:43:39 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:23.664 10:43:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:23.664 10:43:39 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:23.664 10:43:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:23.664 10:43:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:23.664 10:43:39 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:23.664 10:43:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:23.664 10:43:39 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:23.664 10:43:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:23.664 { 00:03:23.664 "name": "Malloc1", 00:03:23.664 "aliases": [ 00:03:23.664 "e3436c29-ddfd-4b77-bd35-e80b6e5d5d2a" 00:03:23.664 ], 00:03:23.664 "product_name": "Malloc disk", 00:03:23.664 "block_size": 4096, 00:03:23.664 "num_blocks": 256, 00:03:23.664 "uuid": "e3436c29-ddfd-4b77-bd35-e80b6e5d5d2a", 00:03:23.664 "assigned_rate_limits": { 00:03:23.664 "rw_ios_per_sec": 0, 00:03:23.664 "rw_mbytes_per_sec": 0, 00:03:23.664 "r_mbytes_per_sec": 0, 00:03:23.664 "w_mbytes_per_sec": 0 00:03:23.664 }, 00:03:23.664 "claimed": false, 00:03:23.664 "zoned": false, 00:03:23.664 "supported_io_types": { 00:03:23.664 "read": true, 00:03:23.664 "write": true, 00:03:23.664 "unmap": true, 00:03:23.664 "write_zeroes": true, 00:03:23.664 
"flush": true, 00:03:23.664 "reset": true, 00:03:23.664 "compare": false, 00:03:23.664 "compare_and_write": false, 00:03:23.664 "abort": true, 00:03:23.664 "nvme_admin": false, 00:03:23.664 "nvme_io": false 00:03:23.664 }, 00:03:23.664 "memory_domains": [ 00:03:23.664 { 00:03:23.664 "dma_device_id": "system", 00:03:23.664 "dma_device_type": 1 00:03:23.664 }, 00:03:23.664 { 00:03:23.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:23.664 "dma_device_type": 2 00:03:23.664 } 00:03:23.664 ], 00:03:23.664 "driver_specific": {} 00:03:23.664 } 00:03:23.664 ]' 00:03:23.664 10:43:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:23.664 10:43:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:23.664 10:43:39 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:23.664 10:43:39 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:23.664 10:43:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:23.664 10:43:39 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:23.664 10:43:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:23.664 10:43:39 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:23.664 10:43:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:23.664 10:43:39 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:23.664 10:43:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:23.664 10:43:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:23.664 10:43:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:23.664 00:03:23.664 real 0m0.118s 00:03:23.664 user 0m0.077s 00:03:23.664 sys 0m0.012s 00:03:23.664 10:43:39 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:23.664 10:43:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:23.664 ************************************ 00:03:23.664 END TEST rpc_plugins 00:03:23.664 ************************************ 00:03:23.664 10:43:39 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:23.664 10:43:39 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:23.664 10:43:39 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:23.664 10:43:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:23.664 ************************************ 00:03:23.664 START TEST rpc_trace_cmd_test 00:03:23.664 ************************************ 00:03:23.664 10:43:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1121 -- # rpc_trace_cmd_test 00:03:23.664 10:43:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:23.664 10:43:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:23.664 10:43:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:23.664 10:43:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:23.923 10:43:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:23.923 10:43:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:23.923 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2667176", 00:03:23.923 "tpoint_group_mask": "0x8", 00:03:23.923 "iscsi_conn": { 00:03:23.923 "mask": "0x2", 00:03:23.923 "tpoint_mask": "0x0" 00:03:23.923 }, 00:03:23.923 "scsi": { 00:03:23.923 "mask": "0x4", 00:03:23.923 "tpoint_mask": "0x0" 00:03:23.923 }, 00:03:23.923 "bdev": { 00:03:23.923 "mask": "0x8", 00:03:23.923 "tpoint_mask": 
"0xffffffffffffffff" 00:03:23.923 }, 00:03:23.923 "nvmf_rdma": { 00:03:23.923 "mask": "0x10", 00:03:23.923 "tpoint_mask": "0x0" 00:03:23.923 }, 00:03:23.923 "nvmf_tcp": { 00:03:23.923 "mask": "0x20", 00:03:23.923 "tpoint_mask": "0x0" 00:03:23.923 }, 00:03:23.923 "ftl": { 00:03:23.923 "mask": "0x40", 00:03:23.923 "tpoint_mask": "0x0" 00:03:23.923 }, 00:03:23.923 "blobfs": { 00:03:23.923 "mask": "0x80", 00:03:23.923 "tpoint_mask": "0x0" 00:03:23.923 }, 00:03:23.923 "dsa": { 00:03:23.923 "mask": "0x200", 00:03:23.923 "tpoint_mask": "0x0" 00:03:23.923 }, 00:03:23.923 "thread": { 00:03:23.923 "mask": "0x400", 00:03:23.923 "tpoint_mask": "0x0" 00:03:23.923 }, 00:03:23.923 "nvme_pcie": { 00:03:23.923 "mask": "0x800", 00:03:23.923 "tpoint_mask": "0x0" 00:03:23.923 }, 00:03:23.923 "iaa": { 00:03:23.923 "mask": "0x1000", 00:03:23.923 "tpoint_mask": "0x0" 00:03:23.923 }, 00:03:23.923 "nvme_tcp": { 00:03:23.923 "mask": "0x2000", 00:03:23.923 "tpoint_mask": "0x0" 00:03:23.923 }, 00:03:23.923 "bdev_nvme": { 00:03:23.923 "mask": "0x4000", 00:03:23.923 "tpoint_mask": "0x0" 00:03:23.923 }, 00:03:23.923 "sock": { 00:03:23.923 "mask": "0x8000", 00:03:23.923 "tpoint_mask": "0x0" 00:03:23.923 } 00:03:23.923 }' 00:03:23.923 10:43:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:23.923 10:43:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:23.923 10:43:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:23.923 10:43:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:23.923 10:43:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:23.923 10:43:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:23.923 10:43:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:23.923 10:43:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:23.923 10:43:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:23.923 10:43:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:23.923 00:03:23.923 real 0m0.195s 00:03:23.923 user 0m0.168s 00:03:23.923 sys 0m0.018s 00:03:23.923 10:43:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:23.923 10:43:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:23.923 ************************************ 00:03:23.923 END TEST rpc_trace_cmd_test 00:03:23.923 ************************************ 00:03:23.923 10:43:40 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:23.923 10:43:40 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:23.923 10:43:40 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:23.923 10:43:40 rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:23.923 10:43:40 rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:23.923 10:43:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:23.923 ************************************ 00:03:23.923 START TEST rpc_daemon_integrity 00:03:23.923 ************************************ 00:03:23.923 10:43:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1121 -- # rpc_integrity 00:03:23.923 10:43:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:23.923 10:43:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:23.923 10:43:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:23.923 10:43:40 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:23.923 10:43:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:23.923 10:43:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:24.181 10:43:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:24.181 10:43:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:24.181 10:43:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:24.181 10:43:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.181 10:43:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:24.181 10:43:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:24.182 { 00:03:24.182 "name": "Malloc2", 00:03:24.182 "aliases": [ 00:03:24.182 "eef88e2b-e0f0-4048-ade0-14e50ff4da80" 00:03:24.182 ], 00:03:24.182 "product_name": "Malloc disk", 00:03:24.182 "block_size": 512, 00:03:24.182 "num_blocks": 16384, 00:03:24.182 "uuid": "eef88e2b-e0f0-4048-ade0-14e50ff4da80", 00:03:24.182 "assigned_rate_limits": { 00:03:24.182 "rw_ios_per_sec": 0, 00:03:24.182 "rw_mbytes_per_sec": 0, 00:03:24.182 "r_mbytes_per_sec": 0, 00:03:24.182 "w_mbytes_per_sec": 0 00:03:24.182 }, 00:03:24.182 "claimed": false, 00:03:24.182 "zoned": false, 00:03:24.182 "supported_io_types": { 00:03:24.182 "read": true, 00:03:24.182 "write": true, 00:03:24.182 "unmap": true, 00:03:24.182 "write_zeroes": true, 00:03:24.182 "flush": true, 00:03:24.182 "reset": true, 00:03:24.182 "compare": false, 00:03:24.182 "compare_and_write": false, 00:03:24.182 "abort": true, 00:03:24.182 "nvme_admin": false, 00:03:24.182 "nvme_io": false 00:03:24.182 }, 00:03:24.182 "memory_domains": [ 00:03:24.182 { 00:03:24.182 "dma_device_id": "system", 00:03:24.182 "dma_device_type": 1 00:03:24.182 }, 00:03:24.182 { 00:03:24.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:24.182 "dma_device_type": 2 00:03:24.182 } 00:03:24.182 ], 00:03:24.182 "driver_specific": {} 00:03:24.182 } 00:03:24.182 ]' 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.182 [2024-05-15 10:43:40.230550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:24.182 [2024-05-15 10:43:40.230595] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:24.182 [2024-05-15 10:43:40.230630] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2492260 00:03:24.182 [2024-05-15 10:43:40.230658] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:24.182 [2024-05-15 10:43:40.232107] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:24.182 [2024-05-15 10:43:40.232134] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:24.182 Passthru0 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:24.182 { 00:03:24.182 "name": "Malloc2", 00:03:24.182 "aliases": [ 00:03:24.182 "eef88e2b-e0f0-4048-ade0-14e50ff4da80" 00:03:24.182 ], 00:03:24.182 "product_name": "Malloc disk", 00:03:24.182 "block_size": 512, 00:03:24.182 "num_blocks": 16384, 00:03:24.182 "uuid": "eef88e2b-e0f0-4048-ade0-14e50ff4da80", 00:03:24.182 "assigned_rate_limits": { 00:03:24.182 "rw_ios_per_sec": 0, 00:03:24.182 "rw_mbytes_per_sec": 0, 00:03:24.182 "r_mbytes_per_sec": 0, 00:03:24.182 "w_mbytes_per_sec": 0 00:03:24.182 }, 00:03:24.182 "claimed": true, 00:03:24.182 "claim_type": "exclusive_write", 00:03:24.182 "zoned": false, 00:03:24.182 "supported_io_types": { 00:03:24.182 "read": true, 00:03:24.182 "write": true, 00:03:24.182 "unmap": true, 00:03:24.182 "write_zeroes": true, 00:03:24.182 "flush": true, 00:03:24.182 "reset": true, 00:03:24.182 "compare": false, 00:03:24.182 "compare_and_write": false, 00:03:24.182 "abort": true, 00:03:24.182 "nvme_admin": false, 00:03:24.182 "nvme_io": false 00:03:24.182 }, 00:03:24.182 "memory_domains": [ 00:03:24.182 { 00:03:24.182 "dma_device_id": "system", 00:03:24.182 "dma_device_type": 1 00:03:24.182 }, 00:03:24.182 { 00:03:24.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:24.182 "dma_device_type": 2 00:03:24.182 } 00:03:24.182 ], 00:03:24.182 "driver_specific": {} 00:03:24.182 }, 00:03:24.182 { 00:03:24.182 "name": "Passthru0", 00:03:24.182 "aliases": [ 00:03:24.182 "3e7a1815-7c23-53fb-8cfa-6436f8f7fe1a" 00:03:24.182 ], 00:03:24.182 "product_name": "passthru", 00:03:24.182 "block_size": 512, 00:03:24.182 "num_blocks": 16384, 00:03:24.182 "uuid": "3e7a1815-7c23-53fb-8cfa-6436f8f7fe1a", 00:03:24.182 "assigned_rate_limits": { 00:03:24.182 "rw_ios_per_sec": 0, 00:03:24.182 "rw_mbytes_per_sec": 0, 00:03:24.182 "r_mbytes_per_sec": 0, 00:03:24.182 "w_mbytes_per_sec": 0 00:03:24.182 }, 00:03:24.182 "claimed": false, 00:03:24.182 "zoned": false, 00:03:24.182 "supported_io_types": { 00:03:24.182 "read": true, 00:03:24.182 "write": true, 00:03:24.182 "unmap": true, 00:03:24.182 "write_zeroes": true, 00:03:24.182 "flush": true, 00:03:24.182 "reset": true, 00:03:24.182 "compare": false, 00:03:24.182 "compare_and_write": false, 00:03:24.182 "abort": true, 00:03:24.182 "nvme_admin": false, 00:03:24.182 "nvme_io": false 00:03:24.182 }, 00:03:24.182 "memory_domains": [ 00:03:24.182 { 00:03:24.182 "dma_device_id": "system", 00:03:24.182 "dma_device_type": 1 00:03:24.182 }, 00:03:24.182 { 00:03:24.182 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:24.182 "dma_device_type": 2 00:03:24.182 } 00:03:24.182 ], 00:03:24.182 "driver_specific": { 00:03:24.182 "passthru": { 00:03:24.182 "name": "Passthru0", 00:03:24.182 "base_bdev_name": "Malloc2" 00:03:24.182 } 00:03:24.182 } 00:03:24.182 } 00:03:24.182 ]' 00:03:24.182 10:43:40 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:24.182 00:03:24.182 real 0m0.222s 00:03:24.182 user 0m0.150s 00:03:24.182 sys 0m0.018s 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:24.182 10:43:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:24.182 ************************************ 00:03:24.182 END TEST rpc_daemon_integrity 00:03:24.182 ************************************ 00:03:24.182 10:43:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:24.182 10:43:40 rpc -- rpc/rpc.sh@84 -- # killprocess 2667176 00:03:24.182 10:43:40 rpc -- common/autotest_common.sh@946 -- # '[' -z 2667176 ']' 00:03:24.182 10:43:40 rpc -- common/autotest_common.sh@950 -- # kill -0 2667176 00:03:24.182 10:43:40 rpc -- common/autotest_common.sh@951 -- # uname 00:03:24.182 10:43:40 rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:24.182 10:43:40 rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2667176 00:03:24.182 10:43:40 rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:24.182 10:43:40 rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:24.182 10:43:40 rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2667176' 00:03:24.182 killing process with pid 2667176 00:03:24.182 10:43:40 rpc -- common/autotest_common.sh@965 -- # kill 2667176 00:03:24.182 10:43:40 rpc -- common/autotest_common.sh@970 -- # wait 2667176 00:03:24.747 00:03:24.747 real 0m2.018s 00:03:24.747 user 0m2.510s 00:03:24.747 sys 0m0.607s 00:03:24.747 10:43:40 rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:24.747 10:43:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:24.747 ************************************ 00:03:24.747 END TEST rpc 00:03:24.747 ************************************ 00:03:24.747 10:43:40 -- spdk/autotest.sh@179 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:24.747 10:43:40 
-- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:24.747 10:43:40 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:24.747 10:43:40 -- common/autotest_common.sh@10 -- # set +x 00:03:24.747 ************************************ 00:03:24.747 START TEST skip_rpc 00:03:24.747 ************************************ 00:03:24.747 10:43:40 skip_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:24.747 * Looking for test storage... 00:03:24.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:24.747 10:43:40 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:24.747 10:43:40 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:24.747 10:43:40 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:24.747 10:43:40 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:24.748 10:43:40 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:24.748 10:43:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:25.005 ************************************ 00:03:25.005 START TEST skip_rpc 00:03:25.005 ************************************ 00:03:25.005 10:43:40 skip_rpc.skip_rpc -- common/autotest_common.sh@1121 -- # test_skip_rpc 00:03:25.005 10:43:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2667554 00:03:25.005 10:43:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:25.005 10:43:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:25.005 10:43:40 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:25.005 [2024-05-15 10:43:41.039122] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
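(Note: at this point the target is running with --no-rpc-server, so no RPC listener exists at all. The NOT wrapper in the stretch that follows runs rpc_cmd spdk_get_version and passes only if the call fails. A minimal sketch of the same check, assuming the stock rpc.py client; prompt lines illustrative:

    $ ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    $ scripts/rpc.py spdk_get_version || echo 'RPC unavailable, which is what skip_rpc expects'
)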
00:03:25.005 [2024-05-15 10:43:41.039196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2667554 ] 00:03:25.005 EAL: No free 2048 kB hugepages reported on node 1 00:03:25.005 [2024-05-15 10:43:41.110235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:25.005 [2024-05-15 10:43:41.229504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:30.293 10:43:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:30.293 10:43:45 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:03:30.293 10:43:45 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:30.293 10:43:45 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:03:30.293 10:43:45 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:30.293 10:43:45 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:03:30.293 10:43:45 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:30.293 10:43:45 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:03:30.293 10:43:45 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:30.293 10:43:45 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.293 10:43:46 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:30.293 10:43:46 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:03:30.293 10:43:46 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:30.293 10:43:46 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:30.293 10:43:46 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:30.293 10:43:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:30.293 10:43:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2667554 00:03:30.293 10:43:46 skip_rpc.skip_rpc -- common/autotest_common.sh@946 -- # '[' -z 2667554 ']' 00:03:30.293 10:43:46 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # kill -0 2667554 00:03:30.293 10:43:46 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # uname 00:03:30.293 10:43:46 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:30.293 10:43:46 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2667554 00:03:30.293 10:43:46 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:30.293 10:43:46 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:30.293 10:43:46 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2667554' 00:03:30.293 killing process with pid 2667554 00:03:30.293 10:43:46 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # kill 2667554 00:03:30.293 10:43:46 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # wait 2667554 00:03:30.293 00:03:30.293 real 0m5.483s 00:03:30.293 user 0m5.164s 00:03:30.293 sys 0m0.325s 00:03:30.293 10:43:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:30.293 10:43:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.293 ************************************ 00:03:30.293 END TEST skip_rpc 
00:03:30.293 ************************************ 00:03:30.293 10:43:46 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:30.293 10:43:46 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:30.293 10:43:46 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:30.293 10:43:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:30.551 ************************************ 00:03:30.551 START TEST skip_rpc_with_json 00:03:30.551 ************************************ 00:03:30.551 10:43:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_json 00:03:30.551 10:43:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:30.551 10:43:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2668250 00:03:30.551 10:43:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:30.551 10:43:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:30.551 10:43:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2668250 00:03:30.551 10:43:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@827 -- # '[' -z 2668250 ']' 00:03:30.551 10:43:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:30.551 10:43:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:30.551 10:43:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:30.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:30.551 10:43:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:30.551 10:43:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:30.551 [2024-05-15 10:43:46.587389] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
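(Note: skip_rpc_with_json, starting here, exercises configuration persistence: it first shows nvmf_get_transports failing while no TCP transport exists, creates the transport, captures the running state with save_config, then boots a second target from that JSON and greps its log for 'TCP Transport Init'. A minimal sketch of the flow recorded below; the config.json and log.txt names mirror the test's paths, and the output redirection is illustrative:

    $ scripts/rpc.py nvmf_create_transport -t tcp
    $ scripts/rpc.py save_config > config.json
    $ ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1
    $ grep -q 'TCP Transport Init' log.txt && echo 'saved config replayed the transport'
)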
00:03:30.551 [2024-05-15 10:43:46.587481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2668250 ] 00:03:30.552 EAL: No free 2048 kB hugepages reported on node 1 00:03:30.552 [2024-05-15 10:43:46.659362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:30.552 [2024-05-15 10:43:46.776625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:31.485 10:43:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:31.485 10:43:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # return 0 00:03:31.485 10:43:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:31.485 10:43:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.485 10:43:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:31.485 [2024-05-15 10:43:47.534456] nvmf_rpc.c:2531:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:31.485 request: 00:03:31.485 { 00:03:31.485 "trtype": "tcp", 00:03:31.485 "method": "nvmf_get_transports", 00:03:31.485 "req_id": 1 00:03:31.485 } 00:03:31.485 Got JSON-RPC error response 00:03:31.485 response: 00:03:31.485 { 00:03:31.485 "code": -19, 00:03:31.485 "message": "No such device" 00:03:31.485 } 00:03:31.485 10:43:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:31.485 10:43:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:31.485 10:43:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.485 10:43:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:31.485 [2024-05-15 10:43:47.542579] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:31.485 10:43:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:31.485 10:43:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:31.485 10:43:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:31.485 10:43:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:31.485 10:43:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:31.485 10:43:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:31.485 { 00:03:31.485 "subsystems": [ 00:03:31.485 { 00:03:31.485 "subsystem": "vfio_user_target", 00:03:31.485 "config": null 00:03:31.485 }, 00:03:31.485 { 00:03:31.485 "subsystem": "keyring", 00:03:31.485 "config": [] 00:03:31.485 }, 00:03:31.485 { 00:03:31.485 "subsystem": "iobuf", 00:03:31.485 "config": [ 00:03:31.485 { 00:03:31.485 "method": "iobuf_set_options", 00:03:31.485 "params": { 00:03:31.485 "small_pool_count": 8192, 00:03:31.485 "large_pool_count": 1024, 00:03:31.485 "small_bufsize": 8192, 00:03:31.485 "large_bufsize": 135168 00:03:31.485 } 00:03:31.485 } 00:03:31.485 ] 00:03:31.485 }, 00:03:31.485 { 00:03:31.485 "subsystem": "sock", 00:03:31.485 "config": [ 00:03:31.485 { 00:03:31.485 "method": "sock_set_default_impl", 00:03:31.485 "params": { 00:03:31.485 "impl_name": "posix" 00:03:31.485 } 00:03:31.485 }, 00:03:31.485 { 00:03:31.485 "method": 
"sock_impl_set_options", 00:03:31.485 "params": { 00:03:31.485 "impl_name": "ssl", 00:03:31.485 "recv_buf_size": 4096, 00:03:31.485 "send_buf_size": 4096, 00:03:31.485 "enable_recv_pipe": true, 00:03:31.485 "enable_quickack": false, 00:03:31.485 "enable_placement_id": 0, 00:03:31.485 "enable_zerocopy_send_server": true, 00:03:31.485 "enable_zerocopy_send_client": false, 00:03:31.485 "zerocopy_threshold": 0, 00:03:31.485 "tls_version": 0, 00:03:31.485 "enable_ktls": false 00:03:31.485 } 00:03:31.485 }, 00:03:31.485 { 00:03:31.485 "method": "sock_impl_set_options", 00:03:31.485 "params": { 00:03:31.485 "impl_name": "posix", 00:03:31.485 "recv_buf_size": 2097152, 00:03:31.485 "send_buf_size": 2097152, 00:03:31.485 "enable_recv_pipe": true, 00:03:31.485 "enable_quickack": false, 00:03:31.485 "enable_placement_id": 0, 00:03:31.485 "enable_zerocopy_send_server": true, 00:03:31.485 "enable_zerocopy_send_client": false, 00:03:31.485 "zerocopy_threshold": 0, 00:03:31.485 "tls_version": 0, 00:03:31.486 "enable_ktls": false 00:03:31.486 } 00:03:31.486 } 00:03:31.486 ] 00:03:31.486 }, 00:03:31.486 { 00:03:31.486 "subsystem": "vmd", 00:03:31.486 "config": [] 00:03:31.486 }, 00:03:31.486 { 00:03:31.486 "subsystem": "accel", 00:03:31.486 "config": [ 00:03:31.486 { 00:03:31.486 "method": "accel_set_options", 00:03:31.486 "params": { 00:03:31.486 "small_cache_size": 128, 00:03:31.486 "large_cache_size": 16, 00:03:31.486 "task_count": 2048, 00:03:31.486 "sequence_count": 2048, 00:03:31.486 "buf_count": 2048 00:03:31.486 } 00:03:31.486 } 00:03:31.486 ] 00:03:31.486 }, 00:03:31.486 { 00:03:31.486 "subsystem": "bdev", 00:03:31.486 "config": [ 00:03:31.486 { 00:03:31.486 "method": "bdev_set_options", 00:03:31.486 "params": { 00:03:31.486 "bdev_io_pool_size": 65535, 00:03:31.486 "bdev_io_cache_size": 256, 00:03:31.486 "bdev_auto_examine": true, 00:03:31.486 "iobuf_small_cache_size": 128, 00:03:31.486 "iobuf_large_cache_size": 16 00:03:31.486 } 00:03:31.486 }, 00:03:31.486 { 00:03:31.486 "method": "bdev_raid_set_options", 00:03:31.486 "params": { 00:03:31.486 "process_window_size_kb": 1024 00:03:31.486 } 00:03:31.486 }, 00:03:31.486 { 00:03:31.486 "method": "bdev_iscsi_set_options", 00:03:31.486 "params": { 00:03:31.486 "timeout_sec": 30 00:03:31.486 } 00:03:31.486 }, 00:03:31.486 { 00:03:31.486 "method": "bdev_nvme_set_options", 00:03:31.486 "params": { 00:03:31.486 "action_on_timeout": "none", 00:03:31.486 "timeout_us": 0, 00:03:31.486 "timeout_admin_us": 0, 00:03:31.486 "keep_alive_timeout_ms": 10000, 00:03:31.486 "arbitration_burst": 0, 00:03:31.486 "low_priority_weight": 0, 00:03:31.486 "medium_priority_weight": 0, 00:03:31.486 "high_priority_weight": 0, 00:03:31.486 "nvme_adminq_poll_period_us": 10000, 00:03:31.486 "nvme_ioq_poll_period_us": 0, 00:03:31.486 "io_queue_requests": 0, 00:03:31.486 "delay_cmd_submit": true, 00:03:31.486 "transport_retry_count": 4, 00:03:31.486 "bdev_retry_count": 3, 00:03:31.486 "transport_ack_timeout": 0, 00:03:31.486 "ctrlr_loss_timeout_sec": 0, 00:03:31.486 "reconnect_delay_sec": 0, 00:03:31.486 "fast_io_fail_timeout_sec": 0, 00:03:31.486 "disable_auto_failback": false, 00:03:31.486 "generate_uuids": false, 00:03:31.486 "transport_tos": 0, 00:03:31.486 "nvme_error_stat": false, 00:03:31.486 "rdma_srq_size": 0, 00:03:31.486 "io_path_stat": false, 00:03:31.486 "allow_accel_sequence": false, 00:03:31.486 "rdma_max_cq_size": 0, 00:03:31.486 "rdma_cm_event_timeout_ms": 0, 00:03:31.486 "dhchap_digests": [ 00:03:31.486 "sha256", 00:03:31.486 "sha384", 00:03:31.486 "sha512" 
00:03:31.486 ], 00:03:31.486 "dhchap_dhgroups": [ 00:03:31.486 "null", 00:03:31.486 "ffdhe2048", 00:03:31.486 "ffdhe3072", 00:03:31.486 "ffdhe4096", 00:03:31.486 "ffdhe6144", 00:03:31.486 "ffdhe8192" 00:03:31.486 ] 00:03:31.486 } 00:03:31.486 }, 00:03:31.486 { 00:03:31.486 "method": "bdev_nvme_set_hotplug", 00:03:31.486 "params": { 00:03:31.486 "period_us": 100000, 00:03:31.486 "enable": false 00:03:31.486 } 00:03:31.486 }, 00:03:31.486 { 00:03:31.486 "method": "bdev_wait_for_examine" 00:03:31.486 } 00:03:31.486 ] 00:03:31.486 }, 00:03:31.486 { 00:03:31.486 "subsystem": "scsi", 00:03:31.486 "config": null 00:03:31.486 }, 00:03:31.486 { 00:03:31.486 "subsystem": "scheduler", 00:03:31.486 "config": [ 00:03:31.486 { 00:03:31.486 "method": "framework_set_scheduler", 00:03:31.486 "params": { 00:03:31.486 "name": "static" 00:03:31.486 } 00:03:31.486 } 00:03:31.486 ] 00:03:31.486 }, 00:03:31.486 { 00:03:31.486 "subsystem": "vhost_scsi", 00:03:31.486 "config": [] 00:03:31.486 }, 00:03:31.486 { 00:03:31.486 "subsystem": "vhost_blk", 00:03:31.486 "config": [] 00:03:31.486 }, 00:03:31.486 { 00:03:31.486 "subsystem": "ublk", 00:03:31.486 "config": [] 00:03:31.486 }, 00:03:31.486 { 00:03:31.486 "subsystem": "nbd", 00:03:31.486 "config": [] 00:03:31.486 }, 00:03:31.486 { 00:03:31.486 "subsystem": "nvmf", 00:03:31.486 "config": [ 00:03:31.486 { 00:03:31.486 "method": "nvmf_set_config", 00:03:31.486 "params": { 00:03:31.486 "discovery_filter": "match_any", 00:03:31.486 "admin_cmd_passthru": { 00:03:31.486 "identify_ctrlr": false 00:03:31.486 } 00:03:31.486 } 00:03:31.486 }, 00:03:31.486 { 00:03:31.486 "method": "nvmf_set_max_subsystems", 00:03:31.486 "params": { 00:03:31.486 "max_subsystems": 1024 00:03:31.486 } 00:03:31.486 }, 00:03:31.486 { 00:03:31.486 "method": "nvmf_set_crdt", 00:03:31.486 "params": { 00:03:31.486 "crdt1": 0, 00:03:31.486 "crdt2": 0, 00:03:31.486 "crdt3": 0 00:03:31.486 } 00:03:31.486 }, 00:03:31.486 { 00:03:31.486 "method": "nvmf_create_transport", 00:03:31.486 "params": { 00:03:31.486 "trtype": "TCP", 00:03:31.486 "max_queue_depth": 128, 00:03:31.486 "max_io_qpairs_per_ctrlr": 127, 00:03:31.486 "in_capsule_data_size": 4096, 00:03:31.486 "max_io_size": 131072, 00:03:31.486 "io_unit_size": 131072, 00:03:31.486 "max_aq_depth": 128, 00:03:31.486 "num_shared_buffers": 511, 00:03:31.486 "buf_cache_size": 4294967295, 00:03:31.486 "dif_insert_or_strip": false, 00:03:31.486 "zcopy": false, 00:03:31.486 "c2h_success": true, 00:03:31.486 "sock_priority": 0, 00:03:31.486 "abort_timeout_sec": 1, 00:03:31.486 "ack_timeout": 0, 00:03:31.486 "data_wr_pool_size": 0 00:03:31.486 } 00:03:31.486 } 00:03:31.486 ] 00:03:31.486 }, 00:03:31.486 { 00:03:31.486 "subsystem": "iscsi", 00:03:31.486 "config": [ 00:03:31.486 { 00:03:31.486 "method": "iscsi_set_options", 00:03:31.486 "params": { 00:03:31.486 "node_base": "iqn.2016-06.io.spdk", 00:03:31.486 "max_sessions": 128, 00:03:31.486 "max_connections_per_session": 2, 00:03:31.486 "max_queue_depth": 64, 00:03:31.486 "default_time2wait": 2, 00:03:31.486 "default_time2retain": 20, 00:03:31.486 "first_burst_length": 8192, 00:03:31.486 "immediate_data": true, 00:03:31.486 "allow_duplicated_isid": false, 00:03:31.486 "error_recovery_level": 0, 00:03:31.486 "nop_timeout": 60, 00:03:31.486 "nop_in_interval": 30, 00:03:31.486 "disable_chap": false, 00:03:31.486 "require_chap": false, 00:03:31.486 "mutual_chap": false, 00:03:31.486 "chap_group": 0, 00:03:31.486 "max_large_datain_per_connection": 64, 00:03:31.486 "max_r2t_per_connection": 4, 00:03:31.486 
"pdu_pool_size": 36864, 00:03:31.486 "immediate_data_pool_size": 16384, 00:03:31.486 "data_out_pool_size": 2048 00:03:31.486 } 00:03:31.486 } 00:03:31.486 ] 00:03:31.486 } 00:03:31.486 ] 00:03:31.486 } 00:03:31.486 10:43:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:31.486 10:43:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2668250 00:03:31.486 10:43:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 2668250 ']' 00:03:31.486 10:43:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 2668250 00:03:31.487 10:43:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:03:31.487 10:43:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:31.487 10:43:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2668250 00:03:31.743 10:43:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:31.743 10:43:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:31.743 10:43:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2668250' 00:03:31.743 killing process with pid 2668250 00:03:31.743 10:43:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 2668250 00:03:31.743 10:43:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 2668250 00:03:32.000 10:43:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2668512 00:03:32.000 10:43:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:32.000 10:43:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:37.258 10:43:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2668512 00:03:37.258 10:43:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@946 -- # '[' -z 2668512 ']' 00:03:37.258 10:43:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # kill -0 2668512 00:03:37.258 10:43:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # uname 00:03:37.258 10:43:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:37.258 10:43:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2668512 00:03:37.258 10:43:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:37.258 10:43:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:37.258 10:43:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2668512' 00:03:37.258 killing process with pid 2668512 00:03:37.258 10:43:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # kill 2668512 00:03:37.258 10:43:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # wait 2668512 00:03:37.516 10:43:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:37.516 10:43:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:37.516 00:03:37.516 real 
0m7.125s 00:03:37.516 user 0m6.891s 00:03:37.516 sys 0m0.755s 00:03:37.516 10:43:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:37.516 10:43:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:37.516 ************************************ 00:03:37.516 END TEST skip_rpc_with_json 00:03:37.516 ************************************ 00:03:37.516 10:43:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:37.516 10:43:53 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:37.516 10:43:53 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:37.516 10:43:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:37.516 ************************************ 00:03:37.516 START TEST skip_rpc_with_delay 00:03:37.516 ************************************ 00:03:37.516 10:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1121 -- # test_skip_rpc_with_delay 00:03:37.516 10:43:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:37.516 10:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:03:37.516 10:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:37.516 10:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:37.517 10:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:37.517 10:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:37.517 10:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:37.517 10:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:37.517 10:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:37.517 10:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:37.517 10:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:37.517 10:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:37.775 [2024-05-15 10:43:53.763867] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
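(Note: skip_rpc_with_delay asserts a startup-time failure rather than an RPC failure: --wait-for-rpc is meaningless once --no-rpc-server has disabled the listener, and spdk_app_start rejects the combination with the *ERROR* shown above. Reproduced in isolation; the exit-status line is illustrative:

    $ ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
    # app.c: spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
    $ echo $?                                  # non-zero; the NOT wrapper turns that into a pass
)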
00:03:37.775 [2024-05-15 10:43:53.764002] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:03:37.775 10:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:03:37.775 10:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:37.775 10:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:37.775 10:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:37.775 00:03:37.775 real 0m0.064s 00:03:37.775 user 0m0.041s 00:03:37.775 sys 0m0.022s 00:03:37.775 10:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:37.775 10:43:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:37.775 ************************************ 00:03:37.775 END TEST skip_rpc_with_delay 00:03:37.775 ************************************ 00:03:37.775 10:43:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:37.775 10:43:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:37.775 10:43:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:37.775 10:43:53 skip_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:37.775 10:43:53 skip_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:37.775 10:43:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:37.775 ************************************ 00:03:37.775 START TEST exit_on_failed_rpc_init 00:03:37.775 ************************************ 00:03:37.775 10:43:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1121 -- # test_exit_on_failed_rpc_init 00:03:37.775 10:43:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2669230 00:03:37.775 10:43:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:37.775 10:43:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2669230 00:03:37.775 10:43:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@827 -- # '[' -z 2669230 ']' 00:03:37.775 10:43:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:37.775 10:43:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:37.775 10:43:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:37.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:37.775 10:43:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:37.775 10:43:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:37.775 [2024-05-15 10:43:53.883085] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
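(Note: exit_on_failed_rpc_init leaves this first target (pid 2669230, core mask 0x1) running and then launches a second spdk_tgt with -m 0x2. Both instances default to the same RPC socket, so the second one must fail RPC initialization and exit non-zero, which is exactly what the run below records:

    $ ./build/bin/spdk_tgt -m 0x1 &            # first instance owns /var/tmp/spdk.sock
    $ ./build/bin/spdk_tgt -m 0x2              # same default socket: rpc.c reports it in use and the app stops
)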
00:03:37.775 [2024-05-15 10:43:53.883167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669230 ] 00:03:37.775 EAL: No free 2048 kB hugepages reported on node 1 00:03:37.775 [2024-05-15 10:43:53.957180] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:38.033 [2024-05-15 10:43:54.078865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:38.292 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:38.292 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # return 0 00:03:38.292 10:43:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:38.292 10:43:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:38.292 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:03:38.292 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:38.292 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:38.292 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:38.292 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:38.292 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:38.292 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:38.292 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:38.292 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:38.292 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:38.292 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:38.292 [2024-05-15 10:43:54.399274] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:03:38.292 [2024-05-15 10:43:54.399364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669240 ] 00:03:38.292 EAL: No free 2048 kB hugepages reported on node 1 00:03:38.292 [2024-05-15 10:43:54.477351] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:38.549 [2024-05-15 10:43:54.599515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:03:38.549 [2024-05-15 10:43:54.599629] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:03:38.549 [2024-05-15 10:43:54.599650] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:03:38.549 [2024-05-15 10:43:54.599664] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:03:38.549 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:03:38.549 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:38.549 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:03:38.549 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:03:38.549 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:03:38.549 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:38.549 10:43:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:03:38.549 10:43:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2669230 00:03:38.549 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@946 -- # '[' -z 2669230 ']' 00:03:38.549 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # kill -0 2669230 00:03:38.549 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # uname 00:03:38.549 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:38.549 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2669230 00:03:38.549 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:38.549 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:38.549 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2669230' 00:03:38.549 killing process with pid 2669230 00:03:38.549 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # kill 2669230 00:03:38.549 10:43:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # wait 2669230 00:03:39.114 00:03:39.114 real 0m1.386s 00:03:39.114 user 0m1.563s 00:03:39.114 sys 0m0.489s 00:03:39.114 10:43:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:39.114 10:43:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:39.114 ************************************ 00:03:39.114 END TEST exit_on_failed_rpc_init 00:03:39.114 ************************************ 00:03:39.114 10:43:55 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:39.115 00:03:39.115 real 0m14.330s 00:03:39.115 user 0m13.779s 00:03:39.115 sys 0m1.750s 00:03:39.115 10:43:55 skip_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:39.115 10:43:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:39.115 ************************************ 00:03:39.115 END TEST skip_rpc 00:03:39.115 ************************************ 00:03:39.115 10:43:55 -- spdk/autotest.sh@180 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:39.115 10:43:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:39.115 10:43:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:39.115 10:43:55 -- 
common/autotest_common.sh@10 -- # set +x 00:03:39.115 ************************************ 00:03:39.115 START TEST rpc_client 00:03:39.115 ************************************ 00:03:39.115 10:43:55 rpc_client -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:39.115 * Looking for test storage... 00:03:39.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:39.115 10:43:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:39.374 OK 00:03:39.374 10:43:55 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:39.374 00:03:39.374 real 0m0.064s 00:03:39.374 user 0m0.029s 00:03:39.374 sys 0m0.041s 00:03:39.374 10:43:55 rpc_client -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:39.374 10:43:55 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:03:39.374 ************************************ 00:03:39.374 END TEST rpc_client 00:03:39.374 ************************************ 00:03:39.374 10:43:55 -- spdk/autotest.sh@181 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:39.374 10:43:55 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:39.374 10:43:55 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:39.374 10:43:55 -- common/autotest_common.sh@10 -- # set +x 00:03:39.374 ************************************ 00:03:39.374 START TEST json_config 00:03:39.374 ************************************ 00:03:39.374 10:43:55 json_config -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:39.374 10:43:55 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@7 -- # uname -s 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:39.374 10:43:55 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:39.374 10:43:55 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:39.374 10:43:55 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:39.374 10:43:55 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.374 10:43:55 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.374 10:43:55 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.374 10:43:55 json_config -- paths/export.sh@5 -- # export PATH 00:03:39.374 10:43:55 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@47 -- # : 0 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:39.374 10:43:55 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:39.374 10:43:55 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:39.374 10:43:55 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:03:39.374 10:43:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:03:39.374 10:43:55 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:03:39.374 10:43:55 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + 
SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:39.374 10:43:55 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:03:39.374 10:43:55 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:03:39.374 10:43:55 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:39.374 10:43:55 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:03:39.374 10:43:55 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:39.374 10:43:55 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:03:39.374 10:43:55 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:39.374 10:43:55 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:03:39.374 10:43:55 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:03:39.374 10:43:55 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:39.374 10:43:55 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:03:39.374 INFO: JSON configuration test init 00:03:39.374 10:43:55 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:03:39.374 10:43:55 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:03:39.374 10:43:55 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:39.374 10:43:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:39.374 10:43:55 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:03:39.374 10:43:55 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:39.374 10:43:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:39.374 10:43:55 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:03:39.374 10:43:55 json_config -- json_config/common.sh@9 -- # local app=target 00:03:39.374 10:43:55 json_config -- json_config/common.sh@10 -- # shift 00:03:39.374 10:43:55 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:39.374 10:43:55 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:39.374 10:43:55 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:39.374 10:43:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:39.374 10:43:55 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:39.374 10:43:55 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2669484 00:03:39.374 10:43:55 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:39.374 10:43:55 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:39.374 Waiting for target to run... 
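The "Waiting for target to run..." message above is followed by waitforlisten, which blocks until the just-launched target is actually serving RPCs on /var/tmp/spdk_tgt.sock. A rough sketch of such a wait loop, assuming the SPDK rpc.py is on PATH (the real helper in autotest_common.sh does more, but the shape is the same):

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        local i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1          # target died during startup
            if rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
                return 0                                    # RPC server is answering
            fi
            sleep 0.1
        done
        return 1                                            # timed out
    }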
00:03:39.374 10:43:55 json_config -- json_config/common.sh@25 -- # waitforlisten 2669484 /var/tmp/spdk_tgt.sock 00:03:39.374 10:43:55 json_config -- common/autotest_common.sh@827 -- # '[' -z 2669484 ']' 00:03:39.374 10:43:55 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:39.374 10:43:55 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:39.374 10:43:55 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:39.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:39.374 10:43:55 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:39.374 10:43:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:39.374 [2024-05-15 10:43:55.517025] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:03:39.374 [2024-05-15 10:43:55.517114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2669484 ] 00:03:39.374 EAL: No free 2048 kB hugepages reported on node 1 00:03:39.941 [2024-05-15 10:43:56.033979] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:39.941 [2024-05-15 10:43:56.141180] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:40.507 10:43:56 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:40.507 10:43:56 json_config -- common/autotest_common.sh@860 -- # return 0 00:03:40.507 10:43:56 json_config -- json_config/common.sh@26 -- # echo '' 00:03:40.507 00:03:40.507 10:43:56 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:03:40.507 10:43:56 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:03:40.507 10:43:56 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:40.507 10:43:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:40.507 10:43:56 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:03:40.507 10:43:56 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:03:40.507 10:43:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:40.507 10:43:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:40.507 10:43:56 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:40.507 10:43:56 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:03:40.507 10:43:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:43.787 10:43:59 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:03:43.787 10:43:59 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:03:43.787 10:43:59 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:43.787 10:43:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:43.787 10:43:59 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:03:43.787 10:43:59 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:43.787 10:43:59 json_config -- 
json_config/json_config.sh@46 -- # local enabled_types 00:03:43.787 10:43:59 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:03:43.787 10:43:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:43.787 10:43:59 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:03:43.787 10:43:59 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:43.787 10:43:59 json_config -- json_config/json_config.sh@48 -- # local get_types 00:03:43.787 10:43:59 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:43.787 10:43:59 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:03:43.787 10:43:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:43.787 10:43:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:43.787 10:43:59 json_config -- json_config/json_config.sh@55 -- # return 0 00:03:43.787 10:43:59 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:03:43.787 10:43:59 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:03:43.787 10:43:59 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:03:43.787 10:43:59 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:03:43.787 10:43:59 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:03:43.787 10:43:59 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:03:43.787 10:43:59 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:43.787 10:43:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:43.787 10:43:59 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:43.787 10:43:59 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:03:43.787 10:43:59 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:03:43.787 10:43:59 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:43.787 10:43:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:44.044 MallocForNvmf0 00:03:44.044 10:44:00 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:44.044 10:44:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:44.301 MallocForNvmf1 00:03:44.301 10:44:00 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:44.301 10:44:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:44.558 [2024-05-15 10:44:00.748858] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:44.558 10:44:00 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:44.558 10:44:00 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:44.817 10:44:01 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:44.817 10:44:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:45.074 10:44:01 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:45.074 10:44:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:45.332 10:44:01 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:45.332 10:44:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:45.590 [2024-05-15 10:44:01.719547] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:03:45.590 [2024-05-15 10:44:01.720142] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:45.590 10:44:01 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:03:45.590 10:44:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:45.590 10:44:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:45.590 10:44:01 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:03:45.590 10:44:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:45.590 10:44:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:45.590 10:44:01 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:03:45.590 10:44:01 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:45.590 10:44:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:45.847 MallocBdevForConfigChangeCheck 00:03:45.847 10:44:02 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:03:45.847 10:44:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:45.847 10:44:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:45.847 10:44:02 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:03:45.847 10:44:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:46.413 10:44:02 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:03:46.413 INFO: shutting down applications... 
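Condensed, the create_nvmf_subsystem_config steps that just ran are the following rpc.py sequence (arguments copied from the log; the $rpc shorthand is introduced here for readability):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $rpc bdev_malloc_create 8 512 --name MallocForNvmf0    # 8 MiB bdev, 512 B blocks
    $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1   # 4 MiB bdev, 1024 B blocks
    $rpc nvmf_create_transport -t tcp -u 8192 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

That is: two malloc bdevs, a TCP transport, and one subsystem exposing both bdevs as namespaces on 127.0.0.1:4420 -- exactly the state that save_config then snapshots into spdk_tgt_config.json.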
00:03:46.413 10:44:02 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:03:46.413 10:44:02 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:03:46.413 10:44:02 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:03:46.413 10:44:02 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:03:48.355 Calling clear_iscsi_subsystem 00:03:48.355 Calling clear_nvmf_subsystem 00:03:48.355 Calling clear_nbd_subsystem 00:03:48.355 Calling clear_ublk_subsystem 00:03:48.355 Calling clear_vhost_blk_subsystem 00:03:48.355 Calling clear_vhost_scsi_subsystem 00:03:48.355 Calling clear_bdev_subsystem 00:03:48.355 10:44:04 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:03:48.355 10:44:04 json_config -- json_config/json_config.sh@343 -- # count=100 00:03:48.355 10:44:04 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:03:48.355 10:44:04 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:48.355 10:44:04 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:03:48.355 10:44:04 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:03:48.355 10:44:04 json_config -- json_config/json_config.sh@345 -- # break 00:03:48.355 10:44:04 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:03:48.355 10:44:04 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:03:48.355 10:44:04 json_config -- json_config/common.sh@31 -- # local app=target 00:03:48.355 10:44:04 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:48.355 10:44:04 json_config -- json_config/common.sh@35 -- # [[ -n 2669484 ]] 00:03:48.355 10:44:04 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2669484 00:03:48.355 [2024-05-15 10:44:04.467462] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:03:48.355 10:44:04 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:48.355 10:44:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:48.355 10:44:04 json_config -- json_config/common.sh@41 -- # kill -0 2669484 00:03:48.355 10:44:04 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:03:48.922 10:44:04 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:03:48.922 10:44:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:48.922 10:44:04 json_config -- json_config/common.sh@41 -- # kill -0 2669484 00:03:48.922 10:44:04 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:48.922 10:44:04 json_config -- json_config/common.sh@43 -- # break 00:03:48.922 10:44:04 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:48.922 10:44:04 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:48.922 SPDK target shutdown done 00:03:48.922 10:44:04 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching 
applications...' 00:03:48.922 INFO: relaunching applications... 00:03:48.922 10:44:04 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:48.922 10:44:04 json_config -- json_config/common.sh@9 -- # local app=target 00:03:48.922 10:44:04 json_config -- json_config/common.sh@10 -- # shift 00:03:48.922 10:44:04 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:48.922 10:44:04 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:48.922 10:44:04 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:48.922 10:44:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:48.922 10:44:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:48.922 10:44:04 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2670800 00:03:48.922 10:44:04 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:48.922 10:44:04 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:48.922 Waiting for target to run... 00:03:48.922 10:44:04 json_config -- json_config/common.sh@25 -- # waitforlisten 2670800 /var/tmp/spdk_tgt.sock 00:03:48.922 10:44:04 json_config -- common/autotest_common.sh@827 -- # '[' -z 2670800 ']' 00:03:48.922 10:44:04 json_config -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:48.922 10:44:04 json_config -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:48.922 10:44:04 json_config -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:48.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:48.922 10:44:04 json_config -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:48.922 10:44:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:48.922 [2024-05-15 10:44:05.026529] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:03:48.922 [2024-05-15 10:44:05.026631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2670800 ] 00:03:48.922 EAL: No free 2048 kB hugepages reported on node 1 00:03:49.490 [2024-05-15 10:44:05.573123] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.490 [2024-05-15 10:44:05.676593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:52.772 [2024-05-15 10:44:08.729986] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:52.772 [2024-05-15 10:44:08.761953] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:03:52.772 [2024-05-15 10:44:08.762510] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:53.338 10:44:09 json_config -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:53.338 10:44:09 json_config -- common/autotest_common.sh@860 -- # return 0 00:03:53.338 10:44:09 json_config -- json_config/common.sh@26 -- # echo '' 00:03:53.338 00:03:53.338 10:44:09 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:03:53.338 10:44:09 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:53.338 INFO: Checking if target configuration is the same... 00:03:53.338 10:44:09 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:53.338 10:44:09 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:03:53.338 10:44:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:53.338 + '[' 2 -ne 2 ']' 00:03:53.338 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:53.338 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:53.338 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:53.338 +++ basename /dev/fd/62 00:03:53.338 ++ mktemp /tmp/62.XXX 00:03:53.338 + tmp_file_1=/tmp/62.XDY 00:03:53.338 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:53.338 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:53.338 + tmp_file_2=/tmp/spdk_tgt_config.json.FpK 00:03:53.338 + ret=0 00:03:53.338 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:53.595 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:53.853 + diff -u /tmp/62.XDY /tmp/spdk_tgt_config.json.FpK 00:03:53.853 + echo 'INFO: JSON config files are the same' 00:03:53.853 INFO: JSON config files are the same 00:03:53.853 + rm /tmp/62.XDY /tmp/spdk_tgt_config.json.FpK 00:03:53.853 + exit 0 00:03:53.854 10:44:09 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:03:53.854 10:44:09 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:53.854 INFO: changing configuration and checking if this can be detected... 
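json_diff.sh declares the two configs identical because both dumps go through config_filter.py -method sort before diff -u, so key and array ordering differences are ignored. Stripped of the temp-file bookkeeping, the check amounts to something like this (process substitution instead of mktemp; paths from this run, and it assumes config_filter.py reads stdin, as json_diff.sh invokes it):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
    saved=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json

    if diff -u <($rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort) \
               <($filter -method sort < "$saved"); then
        echo 'INFO: JSON config files are the same'
    fi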
00:03:53.854 10:44:09 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:53.854 10:44:09 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:54.112 10:44:10 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:54.112 10:44:10 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:03:54.112 10:44:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:54.112 + '[' 2 -ne 2 ']' 00:03:54.112 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:54.112 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:54.112 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:54.112 +++ basename /dev/fd/62 00:03:54.112 ++ mktemp /tmp/62.XXX 00:03:54.112 + tmp_file_1=/tmp/62.UAx 00:03:54.112 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:54.112 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:54.112 + tmp_file_2=/tmp/spdk_tgt_config.json.fW0 00:03:54.112 + ret=0 00:03:54.112 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:54.370 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:54.370 + diff -u /tmp/62.UAx /tmp/spdk_tgt_config.json.fW0 00:03:54.370 + ret=1 00:03:54.370 + echo '=== Start of file: /tmp/62.UAx ===' 00:03:54.370 + cat /tmp/62.UAx 00:03:54.370 + echo '=== End of file: /tmp/62.UAx ===' 00:03:54.370 + echo '' 00:03:54.370 + echo '=== Start of file: /tmp/spdk_tgt_config.json.fW0 ===' 00:03:54.370 + cat /tmp/spdk_tgt_config.json.fW0 00:03:54.370 + echo '=== End of file: /tmp/spdk_tgt_config.json.fW0 ===' 00:03:54.370 + echo '' 00:03:54.370 + rm /tmp/62.UAx /tmp/spdk_tgt_config.json.fW0 00:03:54.370 + exit 1 00:03:54.370 10:44:10 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:03:54.370 INFO: configuration change detected. 
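MallocBdevForConfigChangeCheck exists only to be deleted: removing it and re-running the same sorted diff must now yield a non-empty delta (the ret=1 above), which proves the comparison is genuinely sensitive to configuration drift. Reusing the variables from the previous sketch:

    $rpc -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck

    if diff -u <($rpc -s /var/tmp/spdk_tgt.sock save_config | $filter -method sort) \
               <($filter -method sort < "$saved") >/dev/null; then
        echo 'ERROR: intentional configuration change was not detected' >&2
        exit 1
    fi
    echo 'INFO: configuration change detected.'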
00:03:54.370 10:44:10 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:03:54.370 10:44:10 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:03:54.370 10:44:10 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:54.370 10:44:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.370 10:44:10 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:03:54.370 10:44:10 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:03:54.370 10:44:10 json_config -- json_config/json_config.sh@317 -- # [[ -n 2670800 ]] 00:03:54.370 10:44:10 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:03:54.370 10:44:10 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:03:54.370 10:44:10 json_config -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:54.370 10:44:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.370 10:44:10 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:03:54.370 10:44:10 json_config -- json_config/json_config.sh@193 -- # uname -s 00:03:54.370 10:44:10 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:03:54.370 10:44:10 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:03:54.370 10:44:10 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:03:54.371 10:44:10 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:03:54.371 10:44:10 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:54.371 10:44:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:54.371 10:44:10 json_config -- json_config/json_config.sh@323 -- # killprocess 2670800 00:03:54.371 10:44:10 json_config -- common/autotest_common.sh@946 -- # '[' -z 2670800 ']' 00:03:54.371 10:44:10 json_config -- common/autotest_common.sh@950 -- # kill -0 2670800 00:03:54.371 10:44:10 json_config -- common/autotest_common.sh@951 -- # uname 00:03:54.371 10:44:10 json_config -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:54.371 10:44:10 json_config -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2670800 00:03:54.628 10:44:10 json_config -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:54.628 10:44:10 json_config -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:54.628 10:44:10 json_config -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2670800' 00:03:54.628 killing process with pid 2670800 00:03:54.628 10:44:10 json_config -- common/autotest_common.sh@965 -- # kill 2670800 00:03:54.628 [2024-05-15 10:44:10.625766] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:03:54.628 10:44:10 json_config -- common/autotest_common.sh@970 -- # wait 2670800 00:03:56.543 10:44:12 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:56.543 10:44:12 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:03:56.543 10:44:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:56.543 10:44:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.543 10:44:12 
json_config -- json_config/json_config.sh@328 -- # return 0 00:03:56.543 10:44:12 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:03:56.543 INFO: Success 00:03:56.543 00:03:56.543 real 0m16.908s 00:03:56.543 user 0m18.749s 00:03:56.543 sys 0m2.306s 00:03:56.543 10:44:12 json_config -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:56.543 10:44:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:56.543 ************************************ 00:03:56.543 END TEST json_config 00:03:56.543 ************************************ 00:03:56.543 10:44:12 -- spdk/autotest.sh@182 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:56.543 10:44:12 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:56.543 10:44:12 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:56.543 10:44:12 -- common/autotest_common.sh@10 -- # set +x 00:03:56.543 ************************************ 00:03:56.543 START TEST json_config_extra_key 00:03:56.543 ************************************ 00:03:56.543 10:44:12 json_config_extra_key -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:56.543 10:44:12 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:56.543 10:44:12 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:03:56.543 10:44:12 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:56.543 10:44:12 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:56.543 10:44:12 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:56.543 10:44:12 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:56.543 10:44:12 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:56.543 10:44:12 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:56.543 10:44:12 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:56.543 10:44:12 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:56.543 10:44:12 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:56.543 10:44:12 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:56.543 10:44:12 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:56.543 10:44:12 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:56.543 10:44:12 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:56.543 10:44:12 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:56.543 10:44:12 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:56.543 10:44:12 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:56.543 10:44:12 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:56.543 10:44:12 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:56.543 10:44:12 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:56.544 
10:44:12 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:56.544 10:44:12 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.544 10:44:12 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.544 10:44:12 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.544 10:44:12 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:03:56.544 10:44:12 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:56.544 10:44:12 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:03:56.544 10:44:12 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:56.544 10:44:12 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:56.544 10:44:12 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:56.544 10:44:12 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:56.544 10:44:12 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:56.544 10:44:12 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:56.544 10:44:12 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:56.544 10:44:12 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:56.544 10:44:12 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:56.544 10:44:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:56.544 10:44:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:56.544 10:44:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:56.544 10:44:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:56.544 10:44:12 
json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:56.544 10:44:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:56.544 10:44:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:03:56.544 10:44:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:56.544 10:44:12 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:56.544 10:44:12 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:56.544 INFO: launching applications... 00:03:56.544 10:44:12 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:56.544 10:44:12 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:03:56.544 10:44:12 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:03:56.544 10:44:12 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:56.544 10:44:12 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:56.544 10:44:12 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:03:56.544 10:44:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:56.544 10:44:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:56.544 10:44:12 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2671785 00:03:56.544 10:44:12 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:56.544 10:44:12 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:56.544 Waiting for target to run... 00:03:56.544 10:44:12 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2671785 /var/tmp/spdk_tgt.sock 00:03:56.544 10:44:12 json_config_extra_key -- common/autotest_common.sh@827 -- # '[' -z 2671785 ']' 00:03:56.544 10:44:12 json_config_extra_key -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:56.544 10:44:12 json_config_extra_key -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:56.544 10:44:12 json_config_extra_key -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:56.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:56.544 10:44:12 json_config_extra_key -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:56.544 10:44:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:56.544 [2024-05-15 10:44:12.474287] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:03:56.544 [2024-05-15 10:44:12.474374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2671785 ] 00:03:56.544 EAL: No free 2048 kB hugepages reported on node 1 00:03:56.803 [2024-05-15 10:44:12.980216] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:57.062 [2024-05-15 10:44:13.087486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.320 10:44:13 json_config_extra_key -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:57.320 10:44:13 json_config_extra_key -- common/autotest_common.sh@860 -- # return 0 00:03:57.320 10:44:13 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:03:57.320 00:03:57.320 10:44:13 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:03:57.320 INFO: shutting down applications... 00:03:57.320 10:44:13 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:57.320 10:44:13 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:03:57.320 10:44:13 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:57.320 10:44:13 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2671785 ]] 00:03:57.320 10:44:13 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2671785 00:03:57.320 10:44:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:57.320 10:44:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:57.320 10:44:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2671785 00:03:57.320 10:44:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:57.885 10:44:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:57.885 10:44:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:57.885 10:44:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2671785 00:03:57.885 10:44:13 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:57.885 10:44:13 json_config_extra_key -- json_config/common.sh@43 -- # break 00:03:57.885 10:44:13 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:57.885 10:44:13 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:57.885 SPDK target shutdown done 00:03:57.885 10:44:13 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:03:57.885 Success 00:03:57.885 00:03:57.885 real 0m1.542s 00:03:57.885 user 0m1.396s 00:03:57.885 sys 0m0.589s 00:03:57.885 10:44:13 json_config_extra_key -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:57.885 10:44:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:57.885 ************************************ 00:03:57.885 END TEST json_config_extra_key 00:03:57.885 ************************************ 00:03:57.885 10:44:13 -- spdk/autotest.sh@183 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:57.885 10:44:13 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:57.885 10:44:13 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:57.885 10:44:13 -- common/autotest_common.sh@10 -- # set +x 00:03:57.885 ************************************ 
00:03:57.885 START TEST alias_rpc 00:03:57.885 ************************************ 00:03:57.885 10:44:13 alias_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:57.885 * Looking for test storage... 00:03:57.885 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:03:57.885 10:44:14 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:57.885 10:44:14 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2672031 00:03:57.885 10:44:14 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:57.885 10:44:14 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2672031 00:03:57.885 10:44:14 alias_rpc -- common/autotest_common.sh@827 -- # '[' -z 2672031 ']' 00:03:57.885 10:44:14 alias_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:57.885 10:44:14 alias_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:57.885 10:44:14 alias_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:57.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:57.885 10:44:14 alias_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:57.885 10:44:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.885 [2024-05-15 10:44:14.076674] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:03:57.885 [2024-05-15 10:44:14.076765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672031 ] 00:03:57.885 EAL: No free 2048 kB hugepages reported on node 1 00:03:58.143 [2024-05-15 10:44:14.171429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.143 [2024-05-15 10:44:14.302508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.402 10:44:14 alias_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:03:58.402 10:44:14 alias_rpc -- common/autotest_common.sh@860 -- # return 0 00:03:58.402 10:44:14 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:03:58.659 10:44:14 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2672031 00:03:58.659 10:44:14 alias_rpc -- common/autotest_common.sh@946 -- # '[' -z 2672031 ']' 00:03:58.659 10:44:14 alias_rpc -- common/autotest_common.sh@950 -- # kill -0 2672031 00:03:58.659 10:44:14 alias_rpc -- common/autotest_common.sh@951 -- # uname 00:03:58.659 10:44:14 alias_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:03:58.659 10:44:14 alias_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2672031 00:03:58.659 10:44:14 alias_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:03:58.659 10:44:14 alias_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:03:58.659 10:44:14 alias_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2672031' 00:03:58.659 killing process with pid 2672031 00:03:58.659 10:44:14 alias_rpc -- common/autotest_common.sh@965 -- # kill 2672031 00:03:58.659 10:44:14 alias_rpc -- common/autotest_common.sh@970 -- # wait 2672031 
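The killprocess teardown above is deliberately defensive: it confirms the pid is still alive (kill -0), resolves the command name with ps --no-headers -o comm= and refuses to signal anything named sudo, then waits on the pid so the exit status and any shutdown logging are collected before the next test starts. A trimmed sketch of that shape (the real helper also branches on uname for FreeBSD):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 1     # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 for spdk_tgt
        [ "$name" != sudo ] || return 1            # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # valid because the target was launched from this shell
    }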
00:03:59.225 00:03:59.225 real 0m1.376s 00:03:59.225 user 0m1.514s 00:03:59.225 sys 0m0.456s 00:03:59.225 10:44:15 alias_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:03:59.225 10:44:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.225 ************************************ 00:03:59.225 END TEST alias_rpc 00:03:59.225 ************************************ 00:03:59.225 10:44:15 -- spdk/autotest.sh@185 -- # [[ 0 -eq 0 ]] 00:03:59.225 10:44:15 -- spdk/autotest.sh@186 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:59.225 10:44:15 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:03:59.225 10:44:15 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:03:59.225 10:44:15 -- common/autotest_common.sh@10 -- # set +x 00:03:59.225 ************************************ 00:03:59.225 START TEST spdkcli_tcp 00:03:59.225 ************************************ 00:03:59.225 10:44:15 spdkcli_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:59.225 * Looking for test storage... 00:03:59.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:03:59.225 10:44:15 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:03:59.225 10:44:15 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:03:59.225 10:44:15 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:03:59.225 10:44:15 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:03:59.225 10:44:15 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:03:59.225 10:44:15 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:03:59.225 10:44:15 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:03:59.225 10:44:15 spdkcli_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:03:59.225 10:44:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:59.225 10:44:15 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2672216 00:03:59.225 10:44:15 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:03:59.225 10:44:15 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2672216 00:03:59.225 10:44:15 spdkcli_tcp -- common/autotest_common.sh@827 -- # '[' -z 2672216 ']' 00:03:59.225 10:44:15 spdkcli_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:59.225 10:44:15 spdkcli_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:03:59.225 10:44:15 spdkcli_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:59.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:59.225 10:44:15 spdkcli_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:03:59.225 10:44:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:59.484 [2024-05-15 10:44:15.501766] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:03:59.484 [2024-05-15 10:44:15.501860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672216 ] 00:03:59.484 EAL: No free 2048 kB hugepages reported on node 1 00:03:59.484 [2024-05-15 10:44:15.568361] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:59.484 [2024-05-15 10:44:15.675623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:03:59.484 [2024-05-15 10:44:15.675627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.418 10:44:16 spdkcli_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:00.418 10:44:16 spdkcli_tcp -- common/autotest_common.sh@860 -- # return 0 00:04:00.418 10:44:16 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2672353 00:04:00.418 10:44:16 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:00.418 10:44:16 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:00.675 [ 00:04:00.675 "bdev_malloc_delete", 00:04:00.675 "bdev_malloc_create", 00:04:00.675 "bdev_null_resize", 00:04:00.675 "bdev_null_delete", 00:04:00.675 "bdev_null_create", 00:04:00.675 "bdev_nvme_cuse_unregister", 00:04:00.675 "bdev_nvme_cuse_register", 00:04:00.675 "bdev_opal_new_user", 00:04:00.675 "bdev_opal_set_lock_state", 00:04:00.675 "bdev_opal_delete", 00:04:00.675 "bdev_opal_get_info", 00:04:00.675 "bdev_opal_create", 00:04:00.675 "bdev_nvme_opal_revert", 00:04:00.675 "bdev_nvme_opal_init", 00:04:00.675 "bdev_nvme_send_cmd", 00:04:00.675 "bdev_nvme_get_path_iostat", 00:04:00.675 "bdev_nvme_get_mdns_discovery_info", 00:04:00.675 "bdev_nvme_stop_mdns_discovery", 00:04:00.675 "bdev_nvme_start_mdns_discovery", 00:04:00.675 "bdev_nvme_set_multipath_policy", 00:04:00.675 "bdev_nvme_set_preferred_path", 00:04:00.675 "bdev_nvme_get_io_paths", 00:04:00.675 "bdev_nvme_remove_error_injection", 00:04:00.675 "bdev_nvme_add_error_injection", 00:04:00.675 "bdev_nvme_get_discovery_info", 00:04:00.675 "bdev_nvme_stop_discovery", 00:04:00.675 "bdev_nvme_start_discovery", 00:04:00.675 "bdev_nvme_get_controller_health_info", 00:04:00.675 "bdev_nvme_disable_controller", 00:04:00.675 "bdev_nvme_enable_controller", 00:04:00.675 "bdev_nvme_reset_controller", 00:04:00.675 "bdev_nvme_get_transport_statistics", 00:04:00.675 "bdev_nvme_apply_firmware", 00:04:00.675 "bdev_nvme_detach_controller", 00:04:00.675 "bdev_nvme_get_controllers", 00:04:00.675 "bdev_nvme_attach_controller", 00:04:00.675 "bdev_nvme_set_hotplug", 00:04:00.675 "bdev_nvme_set_options", 00:04:00.675 "bdev_passthru_delete", 00:04:00.675 "bdev_passthru_create", 00:04:00.675 "bdev_lvol_check_shallow_copy", 00:04:00.675 "bdev_lvol_start_shallow_copy", 00:04:00.675 "bdev_lvol_grow_lvstore", 00:04:00.675 "bdev_lvol_get_lvols", 00:04:00.675 "bdev_lvol_get_lvstores", 00:04:00.675 "bdev_lvol_delete", 00:04:00.675 "bdev_lvol_set_read_only", 00:04:00.675 "bdev_lvol_resize", 00:04:00.675 "bdev_lvol_decouple_parent", 00:04:00.675 "bdev_lvol_inflate", 00:04:00.675 "bdev_lvol_rename", 00:04:00.675 "bdev_lvol_clone_bdev", 00:04:00.675 "bdev_lvol_clone", 00:04:00.675 "bdev_lvol_snapshot", 00:04:00.675 "bdev_lvol_create", 00:04:00.675 "bdev_lvol_delete_lvstore", 00:04:00.675 "bdev_lvol_rename_lvstore", 00:04:00.675 "bdev_lvol_create_lvstore", 00:04:00.675 "bdev_raid_set_options", 
00:04:00.675 "bdev_raid_remove_base_bdev", 00:04:00.675 "bdev_raid_add_base_bdev", 00:04:00.675 "bdev_raid_delete", 00:04:00.675 "bdev_raid_create", 00:04:00.675 "bdev_raid_get_bdevs", 00:04:00.675 "bdev_error_inject_error", 00:04:00.675 "bdev_error_delete", 00:04:00.675 "bdev_error_create", 00:04:00.675 "bdev_split_delete", 00:04:00.675 "bdev_split_create", 00:04:00.675 "bdev_delay_delete", 00:04:00.675 "bdev_delay_create", 00:04:00.675 "bdev_delay_update_latency", 00:04:00.675 "bdev_zone_block_delete", 00:04:00.675 "bdev_zone_block_create", 00:04:00.675 "blobfs_create", 00:04:00.675 "blobfs_detect", 00:04:00.675 "blobfs_set_cache_size", 00:04:00.675 "bdev_aio_delete", 00:04:00.675 "bdev_aio_rescan", 00:04:00.675 "bdev_aio_create", 00:04:00.675 "bdev_ftl_set_property", 00:04:00.675 "bdev_ftl_get_properties", 00:04:00.675 "bdev_ftl_get_stats", 00:04:00.675 "bdev_ftl_unmap", 00:04:00.675 "bdev_ftl_unload", 00:04:00.675 "bdev_ftl_delete", 00:04:00.675 "bdev_ftl_load", 00:04:00.675 "bdev_ftl_create", 00:04:00.675 "bdev_virtio_attach_controller", 00:04:00.675 "bdev_virtio_scsi_get_devices", 00:04:00.675 "bdev_virtio_detach_controller", 00:04:00.675 "bdev_virtio_blk_set_hotplug", 00:04:00.675 "bdev_iscsi_delete", 00:04:00.675 "bdev_iscsi_create", 00:04:00.675 "bdev_iscsi_set_options", 00:04:00.675 "accel_error_inject_error", 00:04:00.675 "ioat_scan_accel_module", 00:04:00.675 "dsa_scan_accel_module", 00:04:00.675 "iaa_scan_accel_module", 00:04:00.675 "vfu_virtio_create_scsi_endpoint", 00:04:00.675 "vfu_virtio_scsi_remove_target", 00:04:00.675 "vfu_virtio_scsi_add_target", 00:04:00.675 "vfu_virtio_create_blk_endpoint", 00:04:00.675 "vfu_virtio_delete_endpoint", 00:04:00.675 "keyring_file_remove_key", 00:04:00.675 "keyring_file_add_key", 00:04:00.675 "iscsi_get_histogram", 00:04:00.675 "iscsi_enable_histogram", 00:04:00.675 "iscsi_set_options", 00:04:00.675 "iscsi_get_auth_groups", 00:04:00.675 "iscsi_auth_group_remove_secret", 00:04:00.675 "iscsi_auth_group_add_secret", 00:04:00.675 "iscsi_delete_auth_group", 00:04:00.675 "iscsi_create_auth_group", 00:04:00.675 "iscsi_set_discovery_auth", 00:04:00.675 "iscsi_get_options", 00:04:00.675 "iscsi_target_node_request_logout", 00:04:00.675 "iscsi_target_node_set_redirect", 00:04:00.675 "iscsi_target_node_set_auth", 00:04:00.675 "iscsi_target_node_add_lun", 00:04:00.675 "iscsi_get_stats", 00:04:00.675 "iscsi_get_connections", 00:04:00.675 "iscsi_portal_group_set_auth", 00:04:00.675 "iscsi_start_portal_group", 00:04:00.675 "iscsi_delete_portal_group", 00:04:00.675 "iscsi_create_portal_group", 00:04:00.675 "iscsi_get_portal_groups", 00:04:00.675 "iscsi_delete_target_node", 00:04:00.675 "iscsi_target_node_remove_pg_ig_maps", 00:04:00.675 "iscsi_target_node_add_pg_ig_maps", 00:04:00.675 "iscsi_create_target_node", 00:04:00.675 "iscsi_get_target_nodes", 00:04:00.675 "iscsi_delete_initiator_group", 00:04:00.675 "iscsi_initiator_group_remove_initiators", 00:04:00.675 "iscsi_initiator_group_add_initiators", 00:04:00.675 "iscsi_create_initiator_group", 00:04:00.675 "iscsi_get_initiator_groups", 00:04:00.675 "nvmf_set_crdt", 00:04:00.675 "nvmf_set_config", 00:04:00.675 "nvmf_set_max_subsystems", 00:04:00.676 "nvmf_subsystem_get_listeners", 00:04:00.676 "nvmf_subsystem_get_qpairs", 00:04:00.676 "nvmf_subsystem_get_controllers", 00:04:00.676 "nvmf_get_stats", 00:04:00.676 "nvmf_get_transports", 00:04:00.676 "nvmf_create_transport", 00:04:00.676 "nvmf_get_targets", 00:04:00.676 "nvmf_delete_target", 00:04:00.676 "nvmf_create_target", 00:04:00.676 
"nvmf_subsystem_allow_any_host", 00:04:00.676 "nvmf_subsystem_remove_host", 00:04:00.676 "nvmf_subsystem_add_host", 00:04:00.676 "nvmf_ns_remove_host", 00:04:00.676 "nvmf_ns_add_host", 00:04:00.676 "nvmf_subsystem_remove_ns", 00:04:00.676 "nvmf_subsystem_add_ns", 00:04:00.676 "nvmf_subsystem_listener_set_ana_state", 00:04:00.676 "nvmf_discovery_get_referrals", 00:04:00.676 "nvmf_discovery_remove_referral", 00:04:00.676 "nvmf_discovery_add_referral", 00:04:00.676 "nvmf_subsystem_remove_listener", 00:04:00.676 "nvmf_subsystem_add_listener", 00:04:00.676 "nvmf_delete_subsystem", 00:04:00.676 "nvmf_create_subsystem", 00:04:00.676 "nvmf_get_subsystems", 00:04:00.676 "env_dpdk_get_mem_stats", 00:04:00.676 "nbd_get_disks", 00:04:00.676 "nbd_stop_disk", 00:04:00.676 "nbd_start_disk", 00:04:00.676 "ublk_recover_disk", 00:04:00.676 "ublk_get_disks", 00:04:00.676 "ublk_stop_disk", 00:04:00.676 "ublk_start_disk", 00:04:00.676 "ublk_destroy_target", 00:04:00.676 "ublk_create_target", 00:04:00.676 "virtio_blk_create_transport", 00:04:00.676 "virtio_blk_get_transports", 00:04:00.676 "vhost_controller_set_coalescing", 00:04:00.676 "vhost_get_controllers", 00:04:00.676 "vhost_delete_controller", 00:04:00.676 "vhost_create_blk_controller", 00:04:00.676 "vhost_scsi_controller_remove_target", 00:04:00.676 "vhost_scsi_controller_add_target", 00:04:00.676 "vhost_start_scsi_controller", 00:04:00.676 "vhost_create_scsi_controller", 00:04:00.676 "thread_set_cpumask", 00:04:00.676 "framework_get_scheduler", 00:04:00.676 "framework_set_scheduler", 00:04:00.676 "framework_get_reactors", 00:04:00.676 "thread_get_io_channels", 00:04:00.676 "thread_get_pollers", 00:04:00.676 "thread_get_stats", 00:04:00.676 "framework_monitor_context_switch", 00:04:00.676 "spdk_kill_instance", 00:04:00.676 "log_enable_timestamps", 00:04:00.676 "log_get_flags", 00:04:00.676 "log_clear_flag", 00:04:00.676 "log_set_flag", 00:04:00.676 "log_get_level", 00:04:00.676 "log_set_level", 00:04:00.676 "log_get_print_level", 00:04:00.676 "log_set_print_level", 00:04:00.676 "framework_enable_cpumask_locks", 00:04:00.676 "framework_disable_cpumask_locks", 00:04:00.676 "framework_wait_init", 00:04:00.676 "framework_start_init", 00:04:00.676 "scsi_get_devices", 00:04:00.676 "bdev_get_histogram", 00:04:00.676 "bdev_enable_histogram", 00:04:00.676 "bdev_set_qos_limit", 00:04:00.676 "bdev_set_qd_sampling_period", 00:04:00.676 "bdev_get_bdevs", 00:04:00.676 "bdev_reset_iostat", 00:04:00.676 "bdev_get_iostat", 00:04:00.676 "bdev_examine", 00:04:00.676 "bdev_wait_for_examine", 00:04:00.676 "bdev_set_options", 00:04:00.676 "notify_get_notifications", 00:04:00.676 "notify_get_types", 00:04:00.676 "accel_get_stats", 00:04:00.676 "accel_set_options", 00:04:00.676 "accel_set_driver", 00:04:00.676 "accel_crypto_key_destroy", 00:04:00.676 "accel_crypto_keys_get", 00:04:00.676 "accel_crypto_key_create", 00:04:00.676 "accel_assign_opc", 00:04:00.676 "accel_get_module_info", 00:04:00.676 "accel_get_opc_assignments", 00:04:00.676 "vmd_rescan", 00:04:00.676 "vmd_remove_device", 00:04:00.676 "vmd_enable", 00:04:00.676 "sock_get_default_impl", 00:04:00.676 "sock_set_default_impl", 00:04:00.676 "sock_impl_set_options", 00:04:00.676 "sock_impl_get_options", 00:04:00.676 "iobuf_get_stats", 00:04:00.676 "iobuf_set_options", 00:04:00.676 "keyring_get_keys", 00:04:00.676 "framework_get_pci_devices", 00:04:00.676 "framework_get_config", 00:04:00.676 "framework_get_subsystems", 00:04:00.676 "vfu_tgt_set_base_path", 00:04:00.676 "trace_get_info", 00:04:00.676 
"trace_get_tpoint_group_mask", 00:04:00.676 "trace_disable_tpoint_group", 00:04:00.676 "trace_enable_tpoint_group", 00:04:00.676 "trace_clear_tpoint_mask", 00:04:00.676 "trace_set_tpoint_mask", 00:04:00.676 "spdk_get_version", 00:04:00.676 "rpc_get_methods" 00:04:00.676 ] 00:04:00.676 10:44:16 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:00.676 10:44:16 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.676 10:44:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:00.676 10:44:16 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:00.676 10:44:16 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2672216 00:04:00.676 10:44:16 spdkcli_tcp -- common/autotest_common.sh@946 -- # '[' -z 2672216 ']' 00:04:00.676 10:44:16 spdkcli_tcp -- common/autotest_common.sh@950 -- # kill -0 2672216 00:04:00.676 10:44:16 spdkcli_tcp -- common/autotest_common.sh@951 -- # uname 00:04:00.676 10:44:16 spdkcli_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:00.676 10:44:16 spdkcli_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2672216 00:04:00.676 10:44:16 spdkcli_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:00.676 10:44:16 spdkcli_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:00.676 10:44:16 spdkcli_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2672216' 00:04:00.676 killing process with pid 2672216 00:04:00.676 10:44:16 spdkcli_tcp -- common/autotest_common.sh@965 -- # kill 2672216 00:04:00.676 10:44:16 spdkcli_tcp -- common/autotest_common.sh@970 -- # wait 2672216 00:04:01.241 00:04:01.241 real 0m1.778s 00:04:01.241 user 0m3.388s 00:04:01.241 sys 0m0.489s 00:04:01.241 10:44:17 spdkcli_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:01.241 10:44:17 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:01.241 ************************************ 00:04:01.241 END TEST spdkcli_tcp 00:04:01.241 ************************************ 00:04:01.241 10:44:17 -- spdk/autotest.sh@189 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:01.241 10:44:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:01.241 10:44:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:01.241 10:44:17 -- common/autotest_common.sh@10 -- # set +x 00:04:01.241 ************************************ 00:04:01.241 START TEST dpdk_mem_utility 00:04:01.241 ************************************ 00:04:01.241 10:44:17 dpdk_mem_utility -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:01.241 * Looking for test storage... 
00:04:01.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:01.241 10:44:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:01.241 10:44:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2672553 00:04:01.241 10:44:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:01.241 10:44:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2672553 00:04:01.241 10:44:17 dpdk_mem_utility -- common/autotest_common.sh@827 -- # '[' -z 2672553 ']' 00:04:01.241 10:44:17 dpdk_mem_utility -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.241 10:44:17 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:01.241 10:44:17 dpdk_mem_utility -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:01.241 10:44:17 dpdk_mem_utility -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:01.241 10:44:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:01.241 [2024-05-15 10:44:17.333814] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:04:01.241 [2024-05-15 10:44:17.333905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672553 ] 00:04:01.241 EAL: No free 2048 kB hugepages reported on node 1 00:04:01.241 [2024-05-15 10:44:17.400616] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.499 [2024-05-15 10:44:17.506649] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.757 10:44:17 dpdk_mem_utility -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:01.757 10:44:17 dpdk_mem_utility -- common/autotest_common.sh@860 -- # return 0 00:04:01.757 10:44:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:01.757 10:44:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:01.757 10:44:17 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:01.757 10:44:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:01.757 { 00:04:01.757 "filename": "/tmp/spdk_mem_dump.txt" 00:04:01.757 } 00:04:01.757 10:44:17 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:01.757 10:44:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:01.757 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:01.757 1 heaps totaling size 814.000000 MiB 00:04:01.757 size: 814.000000 MiB heap id: 0 00:04:01.757 end heaps---------- 00:04:01.757 8 mempools totaling size 598.116089 MiB 00:04:01.757 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:01.757 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:01.757 size: 84.521057 MiB name: bdev_io_2672553 00:04:01.757 size: 51.011292 MiB name: evtpool_2672553 00:04:01.757 size: 50.003479 MiB name: 
msgpool_2672553 00:04:01.757 size: 21.763794 MiB name: PDU_Pool 00:04:01.757 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:01.757 size: 0.026123 MiB name: Session_Pool 00:04:01.757 end mempools------- 00:04:01.757 6 memzones totaling size 4.142822 MiB 00:04:01.757 size: 1.000366 MiB name: RG_ring_0_2672553 00:04:01.757 size: 1.000366 MiB name: RG_ring_1_2672553 00:04:01.757 size: 1.000366 MiB name: RG_ring_4_2672553 00:04:01.757 size: 1.000366 MiB name: RG_ring_5_2672553 00:04:01.757 size: 0.125366 MiB name: RG_ring_2_2672553 00:04:01.757 size: 0.015991 MiB name: RG_ring_3_2672553 00:04:01.757 end memzones------- 00:04:01.757 10:44:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:01.757 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:01.757 list of free elements. size: 12.519348 MiB 00:04:01.757 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:01.757 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:01.757 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:01.757 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:01.757 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:01.757 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:01.758 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:01.758 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:01.758 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:01.758 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:01.758 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:01.758 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:01.758 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:01.758 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:01.758 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:01.758 list of standard malloc elements. 
size: 199.218079 MiB 00:04:01.758 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:01.758 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:01.758 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:01.758 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:01.758 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:01.758 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:01.758 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:01.758 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:01.758 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:01.758 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:01.758 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:01.758 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:01.758 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:01.758 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:01.758 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:01.758 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:01.758 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:01.758 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:01.758 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:01.758 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:01.758 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:01.758 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:01.758 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:01.758 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:01.758 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:01.758 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:01.758 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:01.758 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:01.758 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:01.758 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:01.758 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:01.758 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:01.758 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:01.758 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:01.758 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:01.758 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:01.758 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:01.758 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:01.758 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:01.758 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:01.758 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:01.758 list of memzone associated elements. 
size: 602.262573 MiB 00:04:01.758 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:01.758 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:01.758 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:01.758 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:01.758 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:01.758 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2672553_0 00:04:01.758 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:01.758 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2672553_0 00:04:01.758 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:01.758 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2672553_0 00:04:01.758 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:01.758 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:01.758 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:01.758 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:01.758 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:01.758 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2672553 00:04:01.758 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:01.758 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2672553 00:04:01.758 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:01.758 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2672553 00:04:01.758 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:01.758 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:01.758 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:01.758 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:01.758 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:01.758 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:01.758 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:01.758 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:01.758 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:01.758 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2672553 00:04:01.758 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:01.758 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2672553 00:04:01.758 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:01.758 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2672553 00:04:01.758 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:01.758 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2672553 00:04:01.758 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:01.758 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2672553 00:04:01.758 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:01.758 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:01.758 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:01.758 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:01.758 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:01.758 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:01.758 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:01.758 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2672553 00:04:01.758 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:01.758 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:01.758 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:01.758 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:01.758 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:01.758 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2672553 00:04:01.758 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:01.758 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:01.758 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:01.758 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2672553 00:04:01.758 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:01.758 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2672553 00:04:01.758 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:01.758 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:01.758 10:44:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:01.758 10:44:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2672553 00:04:01.758 10:44:17 dpdk_mem_utility -- common/autotest_common.sh@946 -- # '[' -z 2672553 ']' 00:04:01.758 10:44:17 dpdk_mem_utility -- common/autotest_common.sh@950 -- # kill -0 2672553 00:04:01.758 10:44:17 dpdk_mem_utility -- common/autotest_common.sh@951 -- # uname 00:04:01.758 10:44:17 dpdk_mem_utility -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:01.758 10:44:17 dpdk_mem_utility -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2672553 00:04:01.758 10:44:17 dpdk_mem_utility -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:01.758 10:44:17 dpdk_mem_utility -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:01.758 10:44:17 dpdk_mem_utility -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2672553' 00:04:01.758 killing process with pid 2672553 00:04:01.758 10:44:17 dpdk_mem_utility -- common/autotest_common.sh@965 -- # kill 2672553 00:04:01.758 10:44:17 dpdk_mem_utility -- common/autotest_common.sh@970 -- # wait 2672553 00:04:02.324 00:04:02.324 real 0m1.112s 00:04:02.324 user 0m1.079s 00:04:02.324 sys 0m0.398s 00:04:02.324 10:44:18 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:02.324 10:44:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:02.324 ************************************ 00:04:02.324 END TEST dpdk_mem_utility 00:04:02.324 ************************************ 00:04:02.324 10:44:18 -- spdk/autotest.sh@190 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:02.324 10:44:18 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:02.324 10:44:18 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:02.324 10:44:18 -- common/autotest_common.sh@10 -- # set +x 00:04:02.324 ************************************ 00:04:02.324 START TEST event 00:04:02.324 ************************************ 00:04:02.324 10:44:18 event -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:02.324 * Looking for test storage... 
00:04:02.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:02.324 10:44:18 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:02.324 10:44:18 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:02.324 10:44:18 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:02.324 10:44:18 event -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:04:02.324 10:44:18 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:02.324 10:44:18 event -- common/autotest_common.sh@10 -- # set +x 00:04:02.324 ************************************ 00:04:02.324 START TEST event_perf 00:04:02.324 ************************************ 00:04:02.324 10:44:18 event.event_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:02.324 Running I/O for 1 seconds...[2024-05-15 10:44:18.494368] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:04:02.324 [2024-05-15 10:44:18.494432] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672741 ] 00:04:02.324 EAL: No free 2048 kB hugepages reported on node 1 00:04:02.583 [2024-05-15 10:44:18.572339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:02.583 [2024-05-15 10:44:18.691213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:02.583 [2024-05-15 10:44:18.691268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:02.583 [2024-05-15 10:44:18.691385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:02.583 [2024-05-15 10:44:18.691388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.985 Running I/O for 1 seconds... 00:04:03.985 lcore 0: 235006 00:04:03.985 lcore 1: 235004 00:04:03.985 lcore 2: 235006 00:04:03.985 lcore 3: 235006 00:04:03.985 done. 00:04:03.985 00:04:03.985 real 0m1.335s 00:04:03.985 user 0m4.228s 00:04:03.985 sys 0m0.103s 00:04:03.985 10:44:19 event.event_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:03.985 10:44:19 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:03.985 ************************************ 00:04:03.985 END TEST event_perf 00:04:03.985 ************************************ 00:04:03.985 10:44:19 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:03.985 10:44:19 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:04:03.985 10:44:19 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:03.985 10:44:19 event -- common/autotest_common.sh@10 -- # set +x 00:04:03.985 ************************************ 00:04:03.985 START TEST event_reactor 00:04:03.985 ************************************ 00:04:03.985 10:44:19 event.event_reactor -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:03.985 [2024-05-15 10:44:19.878855] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:04:03.985 [2024-05-15 10:44:19.878918] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2672901 ] 00:04:03.985 EAL: No free 2048 kB hugepages reported on node 1 00:04:03.985 [2024-05-15 10:44:19.949438] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.985 [2024-05-15 10:44:20.072302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.357 test_start 00:04:05.357 oneshot 00:04:05.357 tick 100 00:04:05.357 tick 100 00:04:05.357 tick 250 00:04:05.357 tick 100 00:04:05.357 tick 100 00:04:05.357 tick 100 00:04:05.357 tick 250 00:04:05.357 tick 500 00:04:05.357 tick 100 00:04:05.357 tick 100 00:04:05.357 tick 250 00:04:05.358 tick 100 00:04:05.358 tick 100 00:04:05.358 test_end 00:04:05.358 00:04:05.358 real 0m1.328s 00:04:05.358 user 0m1.227s 00:04:05.358 sys 0m0.096s 00:04:05.358 10:44:21 event.event_reactor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:05.358 10:44:21 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:05.358 ************************************ 00:04:05.358 END TEST event_reactor 00:04:05.358 ************************************ 00:04:05.358 10:44:21 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:05.358 10:44:21 event -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:04:05.358 10:44:21 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:05.358 10:44:21 event -- common/autotest_common.sh@10 -- # set +x 00:04:05.358 ************************************ 00:04:05.358 START TEST event_reactor_perf 00:04:05.358 ************************************ 00:04:05.358 10:44:21 event.event_reactor_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:05.358 [2024-05-15 10:44:21.257460] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:04:05.358 [2024-05-15 10:44:21.257526] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673059 ] 00:04:05.358 EAL: No free 2048 kB hugepages reported on node 1 00:04:05.358 [2024-05-15 10:44:21.333425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:05.358 [2024-05-15 10:44:21.451464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.732 test_start 00:04:06.732 test_end 00:04:06.732 Performance: 351691 events per second 00:04:06.732 00:04:06.732 real 0m1.328s 00:04:06.732 user 0m1.236s 00:04:06.732 sys 0m0.087s 00:04:06.732 10:44:22 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:06.732 10:44:22 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:06.732 ************************************ 00:04:06.732 END TEST event_reactor_perf 00:04:06.732 ************************************ 00:04:06.732 10:44:22 event -- event/event.sh@49 -- # uname -s 00:04:06.732 10:44:22 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:06.732 10:44:22 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:06.732 10:44:22 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:06.732 10:44:22 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:06.732 10:44:22 event -- common/autotest_common.sh@10 -- # set +x 00:04:06.732 ************************************ 00:04:06.732 START TEST event_scheduler 00:04:06.732 ************************************ 00:04:06.732 10:44:22 event.event_scheduler -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:06.732 * Looking for test storage... 00:04:06.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:06.732 10:44:22 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:06.732 10:44:22 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2673359 00:04:06.732 10:44:22 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:06.732 10:44:22 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.732 10:44:22 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2673359 00:04:06.732 10:44:22 event.event_scheduler -- common/autotest_common.sh@827 -- # '[' -z 2673359 ']' 00:04:06.732 10:44:22 event.event_scheduler -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.732 10:44:22 event.event_scheduler -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:06.732 10:44:22 event.event_scheduler -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:06.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:06.732 10:44:22 event.event_scheduler -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:06.732 10:44:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:06.732 [2024-05-15 10:44:22.710847] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:04:06.732 [2024-05-15 10:44:22.710921] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673359 ] 00:04:06.732 EAL: No free 2048 kB hugepages reported on node 1 00:04:06.732 [2024-05-15 10:44:22.778324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:06.732 [2024-05-15 10:44:22.888845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.732 [2024-05-15 10:44:22.888909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:06.732 [2024-05-15 10:44:22.889042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:06.732 [2024-05-15 10:44:22.889045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:06.732 10:44:22 event.event_scheduler -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:06.732 10:44:22 event.event_scheduler -- common/autotest_common.sh@860 -- # return 0 00:04:06.732 10:44:22 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:06.732 10:44:22 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.732 10:44:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:06.732 POWER: Env isn't set yet! 00:04:06.732 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:06.732 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:04:06.732 POWER: Cannot get available frequencies of lcore 0 00:04:06.732 POWER: Attempting to initialise PSTAT power management... 00:04:06.732 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:06.732 POWER: Initialized successfully for lcore 0 power management 00:04:06.732 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:06.732 POWER: Initialized successfully for lcore 1 power management 00:04:06.732 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:06.732 POWER: Initialized successfully for lcore 2 power management 00:04:06.732 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:06.732 POWER: Initialized successfully for lcore 3 power management 00:04:06.732 10:44:22 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:06.732 10:44:22 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:06.732 10:44:22 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:06.732 10:44:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:07.002 [2024-05-15 10:44:23.045036] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
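Everything in this subtest is driven over RPC: the POWER lines above come from framework_set_scheduler/framework_start_init moving each lcore into the 'performance' cpufreq governor, and the thread churn that follows goes through the test's scheduler_plugin for rpc.py. Condensed from the trace, the sequence is roughly (socket path illustrative; scheduler_plugin ships with test/event/scheduler, not stock spdk_tgt, and the harness is assumed to have put it on PYTHONPATH):

    rpc='./scripts/rpc.py -s /var/tmp/spdk.sock'

    $rpc framework_set_scheduler dynamic    # must happen before init completes
    $rpc framework_start_init               # finishes the --wait-for-rpc startup

    # Plugin RPCs seen below: -n thread name, -m cpumask, -a active (busy) percentage.
    $rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    $rpc --plugin scheduler_plugin scheduler_thread_set_active 11 50   # thread id, busy %
    $rpc --plugin scheduler_plugin scheduler_thread_delete 12          # thread id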
00:04:07.002 10:44:23 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.003 10:44:23 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:07.003 10:44:23 event.event_scheduler -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:07.003 10:44:23 event.event_scheduler -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:07.003 10:44:23 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:07.003 ************************************ 00:04:07.003 START TEST scheduler_create_thread 00:04:07.003 ************************************ 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1121 -- # scheduler_create_thread 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.003 2 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.003 3 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.003 4 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.003 5 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.003 6 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.003 7 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.003 8 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.003 9 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.003 10 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:07.003 10:44:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:08.906 10:44:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:08.906 10:44:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:08.906 10:44:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:08.906 10:44:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:08.906 10:44:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:09.472 10:44:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:09.472 00:04:09.472 real 0m2.619s 00:04:09.472 user 0m0.013s 00:04:09.472 sys 0m0.002s 00:04:09.472 10:44:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:09.472 10:44:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:09.472 ************************************ 00:04:09.472 END TEST scheduler_create_thread 00:04:09.472 ************************************ 00:04:09.729 10:44:25 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:09.730 10:44:25 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2673359 00:04:09.730 10:44:25 event.event_scheduler -- common/autotest_common.sh@946 -- # '[' -z 2673359 ']' 00:04:09.730 10:44:25 event.event_scheduler -- common/autotest_common.sh@950 -- # kill -0 2673359 00:04:09.730 10:44:25 event.event_scheduler -- common/autotest_common.sh@951 -- # uname 00:04:09.730 10:44:25 event.event_scheduler -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:09.730 10:44:25 event.event_scheduler -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2673359 00:04:09.730 10:44:25 event.event_scheduler -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:04:09.730 10:44:25 event.event_scheduler -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:04:09.730 10:44:25 event.event_scheduler -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2673359' 00:04:09.730 killing process with pid 2673359 00:04:09.730 10:44:25 event.event_scheduler -- common/autotest_common.sh@965 -- # kill 2673359 00:04:09.730 10:44:25 event.event_scheduler -- common/autotest_common.sh@970 -- # wait 2673359 00:04:09.988 [2024-05-15 10:44:26.180086] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
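The POWER lines that follow are the flip side of startup: on exit the DPDK power library hands every lcore back to the governor it found ('userspace' and 'schedutil' on this host) after having run the test under 'performance'. The sysfs knobs involved can be inspected directly; a quick sketch, with cpu0 standing in for any core:

    # Current governor and the choices the cpufreq driver offers for core 0.
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors

    # The earlier 'failed to open ... scaling_available_frequencies' warning just
    # means that file is absent for this driver; listing the directory shows what
    # is actually exposed.
    ls /sys/devices/system/cpu/cpu0/cpufreq/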
00:04:10.246 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:04:10.246 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:10.246 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:04:10.246 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:10.246 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:04:10.246 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:10.246 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:04:10.246 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:10.246 00:04:10.246 real 0m3.823s 00:04:10.246 user 0m5.717s 00:04:10.246 sys 0m0.313s 00:04:10.246 10:44:26 event.event_scheduler -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:10.246 10:44:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:10.246 ************************************ 00:04:10.246 END TEST event_scheduler 00:04:10.246 ************************************ 00:04:10.246 10:44:26 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:10.246 10:44:26 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:10.246 10:44:26 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:10.246 10:44:26 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:10.505 10:44:26 event -- common/autotest_common.sh@10 -- # set +x 00:04:10.505 ************************************ 00:04:10.505 START TEST app_repeat 00:04:10.505 ************************************ 00:04:10.505 10:44:26 event.app_repeat -- common/autotest_common.sh@1121 -- # app_repeat_test 00:04:10.505 10:44:26 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:10.505 10:44:26 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:10.505 10:44:26 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:10.505 10:44:26 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:10.505 10:44:26 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:10.505 10:44:26 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:10.505 10:44:26 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:10.505 10:44:26 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2673817 00:04:10.505 10:44:26 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:10.505 10:44:26 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.505 10:44:26 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2673817' 00:04:10.505 Process app_repeat pid: 2673817 00:04:10.505 10:44:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:10.505 10:44:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:10.505 spdk_app_start Round 0 00:04:10.505 10:44:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2673817 /var/tmp/spdk-nbd.sock 00:04:10.505 10:44:26 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2673817 ']' 00:04:10.505 10:44:26 event.app_repeat -- 
common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:10.505 10:44:26 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:10.505 10:44:26 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:10.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:10.505 10:44:26 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:10.505 10:44:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:10.505 [2024-05-15 10:44:26.538255] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:04:10.505 [2024-05-15 10:44:26.538324] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673817 ] 00:04:10.505 EAL: No free 2048 kB hugepages reported on node 1 00:04:10.505 [2024-05-15 10:44:26.612926] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:10.505 [2024-05-15 10:44:26.728667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:10.505 [2024-05-15 10:44:26.728673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.763 10:44:26 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:10.763 10:44:26 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:10.763 10:44:26 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:11.021 Malloc0 00:04:11.021 10:44:27 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:11.280 Malloc1 00:04:11.280 10:44:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:11.280 10:44:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:11.280 10:44:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:11.280 10:44:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:11.280 10:44:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.280 10:44:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:11.280 10:44:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:11.280 10:44:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:11.280 10:44:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:11.280 10:44:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:11.280 10:44:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:11.280 10:44:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:11.280 10:44:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:11.280 10:44:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:11.280 10:44:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:11.280 10:44:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:11.537 /dev/nbd0 00:04:11.537 10:44:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:11.537 10:44:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:11.537 10:44:27 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:11.537 10:44:27 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:11.537 10:44:27 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:11.537 10:44:27 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:11.537 10:44:27 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:11.537 10:44:27 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:11.537 10:44:27 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:11.537 10:44:27 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:11.537 10:44:27 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:11.537 1+0 records in 00:04:11.537 1+0 records out 00:04:11.537 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176739 s, 23.2 MB/s 00:04:11.537 10:44:27 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:11.537 10:44:27 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:11.537 10:44:27 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:11.537 10:44:27 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:11.537 10:44:27 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:11.537 10:44:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:11.537 10:44:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:11.538 10:44:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:11.795 /dev/nbd1 00:04:11.795 10:44:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:11.795 10:44:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:11.795 10:44:27 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:11.795 10:44:27 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:11.795 10:44:27 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:11.795 10:44:27 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:11.795 10:44:27 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:11.795 10:44:27 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:11.795 10:44:27 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:11.795 10:44:27 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:11.795 10:44:27 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:11.795 1+0 records in 00:04:11.795 1+0 records out 00:04:11.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000222678 s, 18.4 MB/s 00:04:11.795 10:44:27 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:11.795 10:44:27 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:11.795 10:44:27 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:11.795 10:44:27 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:11.795 10:44:27 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:11.795 10:44:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:11.795 10:44:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:11.795 10:44:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:11.795 10:44:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:11.795 10:44:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:12.053 { 00:04:12.053 "nbd_device": "/dev/nbd0", 00:04:12.053 "bdev_name": "Malloc0" 00:04:12.053 }, 00:04:12.053 { 00:04:12.053 "nbd_device": "/dev/nbd1", 00:04:12.053 "bdev_name": "Malloc1" 00:04:12.053 } 00:04:12.053 ]' 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:12.053 { 00:04:12.053 "nbd_device": "/dev/nbd0", 00:04:12.053 "bdev_name": "Malloc0" 00:04:12.053 }, 00:04:12.053 { 00:04:12.053 "nbd_device": "/dev/nbd1", 00:04:12.053 "bdev_name": "Malloc1" 00:04:12.053 } 00:04:12.053 ]' 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:12.053 /dev/nbd1' 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:12.053 /dev/nbd1' 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:12.053 256+0 records in 00:04:12.053 256+0 records out 00:04:12.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00499983 s, 210 MB/s 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:12.053 256+0 records in 00:04:12.053 256+0 records out 00:04:12.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237739 s, 44.1 MB/s 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:12.053 256+0 records in 00:04:12.053 256+0 records out 00:04:12.053 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251412 s, 41.7 MB/s 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:12.053 10:44:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:12.311 10:44:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:12.311 10:44:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:12.311 10:44:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.311 10:44:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:12.311 10:44:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:12.311 10:44:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:12.311 10:44:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:12.311 10:44:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:12.311 10:44:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:12.311 10:44:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:12.311 10:44:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:12.311 10:44:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:12.311 10:44:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:12.311 10:44:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:12.311 10:44:28 event.app_repeat -- bdev/nbd_common.sh@41 
-- # break 00:04:12.311 10:44:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:12.311 10:44:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:12.311 10:44:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:12.569 10:44:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:12.569 10:44:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:12.569 10:44:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:12.569 10:44:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:12.569 10:44:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:12.569 10:44:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:12.569 10:44:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:12.569 10:44:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:12.569 10:44:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:12.569 10:44:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:12.569 10:44:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:13.135 10:44:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:13.135 10:44:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:13.135 10:44:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:13.135 10:44:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:13.135 10:44:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:13.135 10:44:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:13.135 10:44:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:13.135 10:44:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:13.135 10:44:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:13.135 10:44:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:13.135 10:44:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:13.135 10:44:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:13.135 10:44:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:13.392 10:44:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:13.651 [2024-05-15 10:44:29.689238] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:13.651 [2024-05-15 10:44:29.804103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.651 [2024-05-15 10:44:29.804103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:13.651 [2024-05-15 10:44:29.865819] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:13.651 [2024-05-15 10:44:29.865897] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
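The write/verify pass that just completed (and repeats in each round below) is the nbd_dd_data_verify flow: fill a scratch file with random data, push it through each nbd device with direct I/O, then compare the devices back against the file. A condensed sketch using the exact paths and sizes from this log:

  tmp=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
  dd if=/dev/urandom of=$tmp bs=4096 count=256            # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=$tmp of=$nbd bs=4096 count=256 oflag=direct   # write through each device
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M $tmp $nbd                              # byte-compare the first 1 MiB back
  done
  rm $tmp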
00:04:16.179 10:44:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:16.179 10:44:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:16.179 spdk_app_start Round 1 00:04:16.179 10:44:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2673817 /var/tmp/spdk-nbd.sock 00:04:16.179 10:44:32 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2673817 ']' 00:04:16.179 10:44:32 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:16.179 10:44:32 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:16.179 10:44:32 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:16.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:16.179 10:44:32 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:16.179 10:44:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:16.437 10:44:32 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:16.437 10:44:32 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:16.437 10:44:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:16.695 Malloc0 00:04:16.695 10:44:32 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:16.953 Malloc1 00:04:16.953 10:44:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:16.953 10:44:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.953 10:44:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:16.953 10:44:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:16.953 10:44:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:16.953 10:44:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:16.953 10:44:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:16.953 10:44:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:16.953 10:44:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:16.953 10:44:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:16.953 10:44:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:16.953 10:44:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:16.953 10:44:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:16.953 10:44:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:16.953 10:44:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:16.953 10:44:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:17.211 /dev/nbd0 00:04:17.211 10:44:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:17.211 10:44:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:04:17.211 10:44:33 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:17.211 10:44:33 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:17.211 10:44:33 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:17.211 10:44:33 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:17.211 10:44:33 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:17.211 10:44:33 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:17.211 10:44:33 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:17.211 10:44:33 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:17.211 10:44:33 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:17.211 1+0 records in 00:04:17.211 1+0 records out 00:04:17.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000171576 s, 23.9 MB/s 00:04:17.211 10:44:33 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:17.211 10:44:33 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:17.211 10:44:33 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:17.211 10:44:33 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:17.211 10:44:33 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:17.211 10:44:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:17.211 10:44:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:17.211 10:44:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:17.469 /dev/nbd1 00:04:17.469 10:44:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:17.469 10:44:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:17.469 10:44:33 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:17.469 10:44:33 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:17.469 10:44:33 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:17.469 10:44:33 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:17.469 10:44:33 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:17.469 10:44:33 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:17.469 10:44:33 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:17.469 10:44:33 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:17.469 10:44:33 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:17.469 1+0 records in 00:04:17.469 1+0 records out 00:04:17.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000152324 s, 26.9 MB/s 00:04:17.469 10:44:33 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:17.726 10:44:33 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:17.726 10:44:33 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:17.726 10:44:33 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:17.726 10:44:33 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:17.726 10:44:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:17.726 10:44:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:17.726 10:44:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:17.726 10:44:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.726 10:44:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:17.726 10:44:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:17.726 { 00:04:17.726 "nbd_device": "/dev/nbd0", 00:04:17.726 "bdev_name": "Malloc0" 00:04:17.726 }, 00:04:17.726 { 00:04:17.726 "nbd_device": "/dev/nbd1", 00:04:17.726 "bdev_name": "Malloc1" 00:04:17.726 } 00:04:17.726 ]' 00:04:17.726 10:44:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:17.726 { 00:04:17.726 "nbd_device": "/dev/nbd0", 00:04:17.726 "bdev_name": "Malloc0" 00:04:17.727 }, 00:04:17.727 { 00:04:17.727 "nbd_device": "/dev/nbd1", 00:04:17.727 "bdev_name": "Malloc1" 00:04:17.727 } 00:04:17.727 ]' 00:04:17.727 10:44:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:17.989 10:44:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:17.989 /dev/nbd1' 00:04:17.989 10:44:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:17.989 /dev/nbd1' 00:04:17.989 10:44:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:17.989 10:44:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:17.989 10:44:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:17.989 10:44:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:17.989 10:44:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:17.989 10:44:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:17.989 10:44:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.989 10:44:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:17.989 10:44:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:17.989 10:44:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:17.989 10:44:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:17.989 10:44:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:17.989 256+0 records in 00:04:17.989 256+0 records out 00:04:17.989 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00378048 s, 277 MB/s 00:04:17.989 10:44:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:17.989 10:44:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:17.989 256+0 records in 00:04:17.989 256+0 records out 00:04:17.989 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0208369 s, 50.3 MB/s 00:04:17.989 10:44:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:17.989 10:44:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:17.989 256+0 records in 00:04:17.989 256+0 records out 00:04:17.989 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024878 s, 42.1 MB/s 00:04:17.989 10:44:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:17.989 10:44:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.989 10:44:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:17.989 10:44:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:17.989 10:44:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:17.989 10:44:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:17.989 10:44:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:17.989 10:44:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:17.989 10:44:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:17.989 10:44:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:17.989 10:44:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:17.989 10:44:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:17.989 10:44:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:17.989 10:44:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:17.989 10:44:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:17.989 10:44:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:17.989 10:44:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:17.989 10:44:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:17.989 10:44:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:18.296 10:44:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:18.296 10:44:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:18.296 10:44:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:18.296 10:44:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:18.296 10:44:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:18.296 10:44:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:18.296 10:44:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:18.296 10:44:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:18.296 10:44:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:18.296 10:44:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:18.554 10:44:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:18.554 10:44:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:18.554 10:44:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:18.554 10:44:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:18.554 10:44:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:18.554 10:44:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:18.554 10:44:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:18.554 10:44:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:18.554 10:44:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:18.554 10:44:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:18.554 10:44:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:18.812 10:44:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:18.812 10:44:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:18.812 10:44:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:18.812 10:44:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:18.812 10:44:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:18.812 10:44:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:18.812 10:44:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:18.812 10:44:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:18.812 10:44:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:18.812 10:44:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:18.812 10:44:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:18.812 10:44:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:18.812 10:44:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:19.070 10:44:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:19.329 [2024-05-15 10:44:35.443387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:19.329 [2024-05-15 10:44:35.558279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:19.329 [2024-05-15 10:44:35.558282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.587 [2024-05-15 10:44:35.621615] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:19.587 [2024-05-15 10:44:35.621694] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
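Each /dev/nbdX used above is gated by the waitfornbd helper whose xtrace keeps recurring: poll /proc/partitions until the kernel registers the device, then prove it is readable with one direct 4 KiB read. A sketch of that logic (the sleep between retries is an assumption, since the trace only ever shows the first probe succeeding; the scratch path matches the log):

  waitfornbd() {
      local nbd_name=$1 i size
      local testfile=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1                                   # assumed back-off between probes
      done
      dd if=/dev/$nbd_name of=$testfile bs=4096 count=1 iflag=direct
      size=$(stat -c %s $testfile)
      rm -f $testfile
      [ "$size" != 0 ]                                # a full block read back means the device is live
  }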
00:04:22.113 10:44:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:22.113 10:44:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:22.113 spdk_app_start Round 2 00:04:22.113 10:44:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2673817 /var/tmp/spdk-nbd.sock 00:04:22.113 10:44:38 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2673817 ']' 00:04:22.113 10:44:38 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:22.113 10:44:38 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:22.113 10:44:38 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:22.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:22.113 10:44:38 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:22.113 10:44:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:22.371 10:44:38 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:22.371 10:44:38 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:22.371 10:44:38 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:22.629 Malloc0 00:04:22.629 10:44:38 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:22.888 Malloc1 00:04:22.888 10:44:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:22.888 10:44:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.888 10:44:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:22.888 10:44:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:22.888 10:44:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.888 10:44:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:22.888 10:44:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:22.888 10:44:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:22.888 10:44:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:22.888 10:44:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:22.888 10:44:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:22.888 10:44:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:22.888 10:44:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:22.888 10:44:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:22.888 10:44:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:22.888 10:44:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:23.146 /dev/nbd0 00:04:23.146 10:44:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:23.146 10:44:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd0 00:04:23.146 10:44:39 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd0 00:04:23.146 10:44:39 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:23.146 10:44:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:23.146 10:44:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:23.146 10:44:39 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd0 /proc/partitions 00:04:23.146 10:44:39 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:23.146 10:44:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:23.146 10:44:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:23.146 10:44:39 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:23.146 1+0 records in 00:04:23.146 1+0 records out 00:04:23.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000162675 s, 25.2 MB/s 00:04:23.146 10:44:39 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.146 10:44:39 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:23.146 10:44:39 event.app_repeat -- common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.146 10:44:39 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:23.146 10:44:39 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:23.146 10:44:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:23.146 10:44:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.146 10:44:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:23.404 /dev/nbd1 00:04:23.404 10:44:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:23.404 10:44:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:23.404 10:44:39 event.app_repeat -- common/autotest_common.sh@864 -- # local nbd_name=nbd1 00:04:23.404 10:44:39 event.app_repeat -- common/autotest_common.sh@865 -- # local i 00:04:23.404 10:44:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i = 1 )) 00:04:23.404 10:44:39 event.app_repeat -- common/autotest_common.sh@867 -- # (( i <= 20 )) 00:04:23.404 10:44:39 event.app_repeat -- common/autotest_common.sh@868 -- # grep -q -w nbd1 /proc/partitions 00:04:23.404 10:44:39 event.app_repeat -- common/autotest_common.sh@869 -- # break 00:04:23.404 10:44:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i = 1 )) 00:04:23.404 10:44:39 event.app_repeat -- common/autotest_common.sh@880 -- # (( i <= 20 )) 00:04:23.404 10:44:39 event.app_repeat -- common/autotest_common.sh@881 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:23.404 1+0 records in 00:04:23.404 1+0 records out 00:04:23.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185092 s, 22.1 MB/s 00:04:23.404 10:44:39 event.app_repeat -- common/autotest_common.sh@882 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.404 10:44:39 event.app_repeat -- common/autotest_common.sh@882 -- # size=4096 00:04:23.404 10:44:39 event.app_repeat -- 
common/autotest_common.sh@883 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:23.404 10:44:39 event.app_repeat -- common/autotest_common.sh@884 -- # '[' 4096 '!=' 0 ']' 00:04:23.404 10:44:39 event.app_repeat -- common/autotest_common.sh@885 -- # return 0 00:04:23.404 10:44:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:23.404 10:44:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:23.404 10:44:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:23.404 10:44:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.404 10:44:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:23.662 { 00:04:23.662 "nbd_device": "/dev/nbd0", 00:04:23.662 "bdev_name": "Malloc0" 00:04:23.662 }, 00:04:23.662 { 00:04:23.662 "nbd_device": "/dev/nbd1", 00:04:23.662 "bdev_name": "Malloc1" 00:04:23.662 } 00:04:23.662 ]' 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:23.662 { 00:04:23.662 "nbd_device": "/dev/nbd0", 00:04:23.662 "bdev_name": "Malloc0" 00:04:23.662 }, 00:04:23.662 { 00:04:23.662 "nbd_device": "/dev/nbd1", 00:04:23.662 "bdev_name": "Malloc1" 00:04:23.662 } 00:04:23.662 ]' 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:23.662 /dev/nbd1' 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:23.662 /dev/nbd1' 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:23.662 256+0 records in 00:04:23.662 256+0 records out 00:04:23.662 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00503124 s, 208 MB/s 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:23.662 256+0 records in 00:04:23.662 256+0 records out 00:04:23.662 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.02404 s, 43.6 MB/s 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:23.662 256+0 records in 00:04:23.662 256+0 records out 00:04:23.662 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0229992 s, 45.6 MB/s 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:23.662 10:44:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:23.663 10:44:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:23.663 10:44:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:23.663 10:44:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:23.663 10:44:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:23.663 10:44:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:23.663 10:44:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:23.663 10:44:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:23.663 10:44:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:23.663 10:44:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:23.663 10:44:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:23.663 10:44:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:23.921 10:44:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:23.921 10:44:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:23.921 10:44:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:23.921 10:44:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:23.921 10:44:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:23.921 10:44:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:23.921 10:44:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:23.921 10:44:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:23.921 10:44:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:23.921 10:44:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:24.179 10:44:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:24.179 10:44:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:24.179 10:44:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:24.179 10:44:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:24.179 10:44:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:24.179 10:44:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:24.179 10:44:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:24.179 10:44:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:24.179 10:44:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:24.179 10:44:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.179 10:44:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:24.436 10:44:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:24.436 10:44:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:24.436 10:44:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:24.436 10:44:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:24.436 10:44:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:24.436 10:44:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:24.436 10:44:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:24.436 10:44:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:24.436 10:44:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:24.436 10:44:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:24.436 10:44:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:24.436 10:44:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:24.436 10:44:40 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:24.693 10:44:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:24.951 [2024-05-15 10:44:41.163752] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:25.209 [2024-05-15 10:44:41.279721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.209 [2024-05-15 10:44:41.279721] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:25.209 [2024-05-15 10:44:41.342726] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:25.209 [2024-05-15 10:44:41.342802] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
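The nbd_get_count check that closes every round asks the target which devices it still exports and expects zero after both nbd_stop_disk calls. Condensed from the trace above (the || true mirrors the guarded grep visible at nbd_common.sh@65, since grep -c exits non-zero when nothing matches):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  json=$($rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks)        # '[]' once both disks are stopped
  names=$(echo "$json" | jq -r '.[] | .nbd_device')
  count=$(echo "$names" | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ]                                          # anything left exported is a failure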
00:04:27.735 10:44:43 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2673817 /var/tmp/spdk-nbd.sock 00:04:27.735 10:44:43 event.app_repeat -- common/autotest_common.sh@827 -- # '[' -z 2673817 ']' 00:04:27.735 10:44:43 event.app_repeat -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:27.735 10:44:43 event.app_repeat -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:27.735 10:44:43 event.app_repeat -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:27.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:27.735 10:44:43 event.app_repeat -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:27.735 10:44:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:27.992 10:44:44 event.app_repeat -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:27.992 10:44:44 event.app_repeat -- common/autotest_common.sh@860 -- # return 0 00:04:27.992 10:44:44 event.app_repeat -- event/event.sh@39 -- # killprocess 2673817 00:04:27.992 10:44:44 event.app_repeat -- common/autotest_common.sh@946 -- # '[' -z 2673817 ']' 00:04:27.992 10:44:44 event.app_repeat -- common/autotest_common.sh@950 -- # kill -0 2673817 00:04:27.992 10:44:44 event.app_repeat -- common/autotest_common.sh@951 -- # uname 00:04:27.992 10:44:44 event.app_repeat -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:27.992 10:44:44 event.app_repeat -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2673817 00:04:27.992 10:44:44 event.app_repeat -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:27.992 10:44:44 event.app_repeat -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:27.992 10:44:44 event.app_repeat -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2673817' 00:04:27.992 killing process with pid 2673817 00:04:27.992 10:44:44 event.app_repeat -- common/autotest_common.sh@965 -- # kill 2673817 00:04:27.992 10:44:44 event.app_repeat -- common/autotest_common.sh@970 -- # wait 2673817 00:04:28.251 spdk_app_start is called in Round 0. 00:04:28.251 Shutdown signal received, stop current app iteration 00:04:28.251 Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 reinitialization... 00:04:28.251 spdk_app_start is called in Round 1. 00:04:28.251 Shutdown signal received, stop current app iteration 00:04:28.251 Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 reinitialization... 00:04:28.251 spdk_app_start is called in Round 2. 00:04:28.251 Shutdown signal received, stop current app iteration 00:04:28.251 Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 reinitialization... 00:04:28.251 spdk_app_start is called in Round 3. 
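The killprocess call above follows the harness pattern traced at autotest_common.sh@946-970: confirm the pid is set and alive, read its comm name so a bare sudo wrapper is never signalled, then SIGTERM and wait so the exit status is observed. A sketch of the Linux path only (the non-Linux branch and the sudo special case are omitted):

  killprocess() {
      local pid=$1 process_name
      [ -n "$pid" ] || return 1
      kill -0 "$pid" || return 1                       # must still be running
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      [ "$process_name" != sudo ] || return 1          # never kill the sudo parent directly
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                      # reap and propagate the exit code
  }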
00:04:28.251 Shutdown signal received, stop current app iteration 00:04:28.251 10:44:44 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:28.251 10:44:44 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:28.251 00:04:28.251 real 0m17.889s 00:04:28.251 user 0m38.920s 00:04:28.251 sys 0m3.444s 00:04:28.251 10:44:44 event.app_repeat -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:28.251 10:44:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:28.251 ************************************ 00:04:28.251 END TEST app_repeat 00:04:28.251 ************************************ 00:04:28.251 10:44:44 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:28.251 10:44:44 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:28.251 10:44:44 event -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:28.251 10:44:44 event -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:28.251 10:44:44 event -- common/autotest_common.sh@10 -- # set +x 00:04:28.251 ************************************ 00:04:28.251 START TEST cpu_locks 00:04:28.251 ************************************ 00:04:28.251 10:44:44 event.cpu_locks -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:28.511 * Looking for test storage... 00:04:28.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:28.511 10:44:44 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:28.511 10:44:44 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:28.511 10:44:44 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:28.511 10:44:44 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:28.511 10:44:44 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:28.511 10:44:44 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:28.511 10:44:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:28.511 ************************************ 00:04:28.511 START TEST default_locks 00:04:28.511 ************************************ 00:04:28.511 10:44:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1121 -- # default_locks 00:04:28.511 10:44:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2676166 00:04:28.511 10:44:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:28.511 10:44:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2676166 00:04:28.511 10:44:44 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 2676166 ']' 00:04:28.511 10:44:44 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.511 10:44:44 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:28.511 10:44:44 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
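default_locks begins the way every target test here does: launch spdk_tgt pinned to one core, then block in waitforlisten until its RPC socket answers. The helper disables xtrace for its retry loop, so only the setup and the final return 0 appear in the log; the loop below is therefore an assumption, simplified to a socket-existence check rather than the real RPC probe:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      while ((max_retries--)); do
          kill -0 "$pid" || return 1                  # target died before listening
          [ -S "$rpc_addr" ] && return 0              # socket present: assume it is serving RPCs
          sleep 0.1
      done
      return 1
  }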
00:04:28.511 10:44:44 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:28.511 10:44:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:28.511 [2024-05-15 10:44:44.594569] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:04:28.511 [2024-05-15 10:44:44.594658] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2676166 ] 00:04:28.511 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.511 [2024-05-15 10:44:44.673285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.769 [2024-05-15 10:44:44.792258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.028 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:29.028 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 0 00:04:29.028 10:44:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2676166 00:04:29.028 10:44:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2676166 00:04:29.028 10:44:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:29.285 lslocks: write error 00:04:29.285 10:44:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2676166 00:04:29.285 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@946 -- # '[' -z 2676166 ']' 00:04:29.285 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # kill -0 2676166 00:04:29.285 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # uname 00:04:29.285 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:29.285 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2676166 00:04:29.285 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:29.286 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:29.286 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2676166' 00:04:29.286 killing process with pid 2676166 00:04:29.286 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # kill 2676166 00:04:29.286 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # wait 2676166 00:04:29.851 10:44:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2676166 00:04:29.851 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:04:29.851 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2676166 00:04:29.851 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:29.851 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:29.851 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:29.851 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:29.851 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- 
# waitforlisten 2676166 00:04:29.851 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@827 -- # '[' -z 2676166 ']' 00:04:29.851 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.851 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:29.851 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.851 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:29.851 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:29.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2676166) - No such process 00:04:29.851 ERROR: process (pid: 2676166) is no longer running 00:04:29.851 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:29.851 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # return 1 00:04:29.851 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:04:29.851 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:29.851 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:29.851 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:29.851 10:44:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:29.852 10:44:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:29.852 10:44:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:29.852 10:44:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:29.852 00:04:29.852 real 0m1.327s 00:04:29.852 user 0m1.284s 00:04:29.852 sys 0m0.555s 00:04:29.852 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:29.852 10:44:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:29.852 ************************************ 00:04:29.852 END TEST default_locks 00:04:29.852 ************************************ 00:04:29.852 10:44:45 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:29.852 10:44:45 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:29.852 10:44:45 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:29.852 10:44:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:29.852 ************************************ 00:04:29.852 START TEST default_locks_via_rpc 00:04:29.852 ************************************ 00:04:29.852 10:44:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1121 -- # default_locks_via_rpc 00:04:29.852 10:44:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2676331 00:04:29.852 10:44:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:29.852 10:44:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2676331 00:04:29.852 10:44:45 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2676331 ']' 00:04:29.852 10:44:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.852 10:44:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:29.852 10:44:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.852 10:44:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:29.852 10:44:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.852 [2024-05-15 10:44:45.975557] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:04:29.852 [2024-05-15 10:44:45.975639] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2676331 ] 00:04:29.852 EAL: No free 2048 kB hugepages reported on node 1 00:04:29.852 [2024-05-15 10:44:46.050313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.110 [2024-05-15 10:44:46.167295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.044 10:44:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:31.044 10:44:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:31.044 10:44:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:31.044 10:44:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.044 10:44:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.044 10:44:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.044 10:44:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:31.044 10:44:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:31.044 10:44:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:31.044 10:44:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:31.044 10:44:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:31.044 10:44:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:31.044 10:44:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.044 10:44:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:31.044 10:44:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2676331 00:04:31.044 10:44:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2676331 00:04:31.044 10:44:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:31.044 10:44:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2676331 00:04:31.044 10:44:47 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@946 -- # '[' -z 2676331 ']' 00:04:31.044 10:44:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # kill -0 2676331 00:04:31.044 10:44:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # uname 00:04:31.044 10:44:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:31.044 10:44:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2676331 00:04:31.044 10:44:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:31.044 10:44:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:31.044 10:44:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2676331' 00:04:31.044 killing process with pid 2676331 00:04:31.044 10:44:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # kill 2676331 00:04:31.044 10:44:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # wait 2676331 00:04:31.611 00:04:31.611 real 0m1.735s 00:04:31.611 user 0m1.849s 00:04:31.611 sys 0m0.572s 00:04:31.611 10:44:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:31.611 10:44:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.611 ************************************ 00:04:31.611 END TEST default_locks_via_rpc 00:04:31.611 ************************************ 00:04:31.611 10:44:47 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:31.611 10:44:47 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:31.611 10:44:47 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:31.611 10:44:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:31.611 ************************************ 00:04:31.611 START TEST non_locking_app_on_locked_coremask 00:04:31.611 ************************************ 00:04:31.611 10:44:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # non_locking_app_on_locked_coremask 00:04:31.611 10:44:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2676621 00:04:31.611 10:44:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:31.611 10:44:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2676621 /var/tmp/spdk.sock 00:04:31.611 10:44:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2676621 ']' 00:04:31.611 10:44:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.611 10:44:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:31.611 10:44:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
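
Every test above tears its target down through the killprocess helper. Reconstructed from the repeated xtrace pattern ('kill -0', uname, 'ps --no-headers -o comm='), the logic is roughly the following sketch; the sudo special case is only hinted at here:

  killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1         # still alive?
    if [ "$(uname)" = Linux ]; then
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for spdk_tgt
      # the real helper branches when process_name is "sudo" and signals the child
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                        # wait only works for our own children
  }
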
00:04:31.611 10:44:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:31.611 10:44:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:31.611 [2024-05-15 10:44:47.761963] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:04:31.611 [2024-05-15 10:44:47.762042] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2676621 ] 00:04:31.611 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.611 [2024-05-15 10:44:47.829767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.870 [2024-05-15 10:44:47.940526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.129 10:44:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:32.129 10:44:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:32.129 10:44:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2676629 00:04:32.129 10:44:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:32.129 10:44:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2676629 /var/tmp/spdk2.sock 00:04:32.129 10:44:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2676629 ']' 00:04:32.129 10:44:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:32.129 10:44:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:32.129 10:44:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:32.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:32.129 10:44:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:32.129 10:44:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:32.129 [2024-05-15 10:44:48.250319] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:04:32.129 [2024-05-15 10:44:48.250390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2676629 ] 00:04:32.129 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.387 [2024-05-15 10:44:48.362338] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
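
The 'CPU core locks deactivated' notice above is the whole point of non_locking_app_on_locked_coremask: the first spdk_tgt holds the advisory lock on core 0, yet a second instance can still share the core by opting out of locking and using its own RPC socket. Condensed from the two launch lines in the trace (the workspace prefix is shortened here):

  # First target: default behavior, takes the core-0 lock file.
  spdk_tgt -m 0x1 &
  # Second target: same mask, but no core locks and a second RPC socket,
  # so it starts cleanly next to the first one.
  spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
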
00:04:32.387 [2024-05-15 10:44:48.362373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.387 [2024-05-15 10:44:48.596155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.357 10:44:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:33.357 10:44:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:33.357 10:44:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2676621 00:04:33.357 10:44:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2676621 00:04:33.357 10:44:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:33.616 lslocks: write error 00:04:33.616 10:44:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2676621 00:04:33.616 10:44:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2676621 ']' 00:04:33.616 10:44:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2676621 00:04:33.616 10:44:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:33.616 10:44:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:33.616 10:44:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2676621 00:04:33.616 10:44:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:33.616 10:44:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:33.616 10:44:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2676621' 00:04:33.616 killing process with pid 2676621 00:04:33.616 10:44:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2676621 00:04:33.616 10:44:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2676621 00:04:34.549 10:44:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2676629 00:04:34.549 10:44:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2676629 ']' 00:04:34.549 10:44:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2676629 00:04:34.549 10:44:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:34.549 10:44:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:34.549 10:44:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2676629 00:04:34.549 10:44:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:34.549 10:44:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:34.549 10:44:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2676629' 00:04:34.549 
killing process with pid 2676629 00:04:34.549 10:44:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2676629 00:04:34.549 10:44:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2676629 00:04:35.113 00:04:35.113 real 0m3.467s 00:04:35.113 user 0m3.582s 00:04:35.113 sys 0m1.107s 00:04:35.113 10:44:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:35.113 10:44:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:35.113 ************************************ 00:04:35.113 END TEST non_locking_app_on_locked_coremask 00:04:35.113 ************************************ 00:04:35.113 10:44:51 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:35.113 10:44:51 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:35.113 10:44:51 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:35.113 10:44:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:35.113 ************************************ 00:04:35.113 START TEST locking_app_on_unlocked_coremask 00:04:35.113 ************************************ 00:04:35.114 10:44:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_unlocked_coremask 00:04:35.114 10:44:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2677058 00:04:35.114 10:44:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:35.114 10:44:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2677058 /var/tmp/spdk.sock 00:04:35.114 10:44:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2677058 ']' 00:04:35.114 10:44:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.114 10:44:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:35.114 10:44:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.114 10:44:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:35.114 10:44:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:35.114 [2024-05-15 10:44:51.290577] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:04:35.114 [2024-05-15 10:44:51.290669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2677058 ] 00:04:35.114 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.372 [2024-05-15 10:44:51.364047] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:35.372 [2024-05-15 10:44:51.364083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.372 [2024-05-15 10:44:51.479406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:36.307 10:44:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:36.307 10:44:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:36.307 10:44:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2677194 00:04:36.307 10:44:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:36.307 10:44:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2677194 /var/tmp/spdk2.sock 00:04:36.307 10:44:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2677194 ']' 00:04:36.307 10:44:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:36.307 10:44:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:36.307 10:44:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:36.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:36.307 10:44:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:36.307 10:44:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:36.307 [2024-05-15 10:44:52.274525] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:04:36.307 [2024-05-15 10:44:52.274622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2677194 ] 00:04:36.307 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.307 [2024-05-15 10:44:52.387806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.564 [2024-05-15 10:44:52.625708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.128 10:44:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:37.128 10:44:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:37.128 10:44:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2677194 00:04:37.129 10:44:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2677194 00:04:37.129 10:44:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:37.385 lslocks: write error 00:04:37.385 10:44:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2677058 00:04:37.385 10:44:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2677058 ']' 00:04:37.642 10:44:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 2677058 00:04:37.642 10:44:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:37.642 10:44:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:37.642 10:44:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2677058 00:04:37.642 10:44:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:37.642 10:44:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:37.642 10:44:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2677058' 00:04:37.642 killing process with pid 2677058 00:04:37.642 10:44:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 2677058 00:04:37.642 10:44:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 2677058 00:04:38.575 10:44:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2677194 00:04:38.575 10:44:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2677194 ']' 00:04:38.575 10:44:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # kill -0 2677194 00:04:38.575 10:44:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:38.575 10:44:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:38.575 10:44:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2677194 00:04:38.575 10:44:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 
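
The locks_exist checks in these tests confirm that the target really holds a file lock whose path contains spdk_cpu_lock; as the xtrace shows, it is just lslocks piped into grep. The recurring 'lslocks: write error' is almost certainly benign: grep -q exits on its first match, so lslocks takes an EPIPE while still writing the rest of its table.

  locks_exist_sketch() {
    local pid=$1
    # true if the pid holds any lock on a spdk_cpu_lock_* file
    lslocks -p "$pid" | grep -q spdk_cpu_lock
  }
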
00:04:38.575 10:44:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:38.575 10:44:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2677194' 00:04:38.575 killing process with pid 2677194 00:04:38.575 10:44:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # kill 2677194 00:04:38.575 10:44:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # wait 2677194 00:04:38.833 00:04:38.833 real 0m3.755s 00:04:38.833 user 0m4.066s 00:04:38.833 sys 0m1.093s 00:04:38.833 10:44:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:38.833 10:44:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:38.833 ************************************ 00:04:38.833 END TEST locking_app_on_unlocked_coremask 00:04:38.833 ************************************ 00:04:38.833 10:44:55 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:38.833 10:44:55 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:38.833 10:44:55 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:38.833 10:44:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:38.833 ************************************ 00:04:38.833 START TEST locking_app_on_locked_coremask 00:04:38.833 ************************************ 00:04:38.833 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1121 -- # locking_app_on_locked_coremask 00:04:38.833 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2677529 00:04:38.833 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2677529 /var/tmp/spdk.sock 00:04:38.833 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:38.833 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2677529 ']' 00:04:38.833 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.833 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:38.833 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.833 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:38.833 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:39.091 [2024-05-15 10:44:55.098871] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:04:39.091 [2024-05-15 10:44:55.098970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2677529 ] 00:04:39.091 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.091 [2024-05-15 10:44:55.170733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.091 [2024-05-15 10:44:55.290409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.349 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:39.349 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:39.349 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2677636 00:04:39.349 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:39.349 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2677636 /var/tmp/spdk2.sock 00:04:39.349 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:04:39.349 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2677636 /var/tmp/spdk2.sock 00:04:39.349 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:39.349 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:39.349 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:39.349 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:39.349 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2677636 /var/tmp/spdk2.sock 00:04:39.349 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@827 -- # '[' -z 2677636 ']' 00:04:39.349 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:39.349 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:39.349 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:39.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:39.349 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:39.349 10:44:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:39.607 [2024-05-15 10:44:55.608325] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
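
The NOT wrapper traced here runs a command that is expected to fail and inverts its status: the second target aborts with 'Unable to acquire lock on assigned core mask - exiting', waitforlisten consequently returns 1, and NOT converts that into a pass (es=1, then the '(( !es == 0 ))' test). Stripped to its core, and approximating the signal-exit branch the trace shows as '(( es > 128 ))':

  NOT_sketch() {
    local es=0
    "$@" || es=$?                    # run the wrapped command, keep its status
    (( es > 128 )) && return "$es"   # died from a signal: propagate, don't invert
    (( es != 0 ))                    # success only when the command failed
  }
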
00:04:39.607 [2024-05-15 10:44:55.608396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2677636 ] 00:04:39.607 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.607 [2024-05-15 10:44:55.718807] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2677529 has claimed it. 00:04:39.607 [2024-05-15 10:44:55.718866] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:40.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2677636) - No such process 00:04:40.172 ERROR: process (pid: 2677636) is no longer running 00:04:40.172 10:44:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:40.172 10:44:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # return 1 00:04:40.172 10:44:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:04:40.172 10:44:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:40.172 10:44:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:40.172 10:44:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:40.172 10:44:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2677529 00:04:40.172 10:44:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2677529 00:04:40.172 10:44:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:40.430 lslocks: write error 00:04:40.430 10:44:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2677529 00:04:40.430 10:44:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@946 -- # '[' -z 2677529 ']' 00:04:40.430 10:44:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # kill -0 2677529 00:04:40.430 10:44:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # uname 00:04:40.430 10:44:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:40.430 10:44:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2677529 00:04:40.430 10:44:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:40.430 10:44:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:40.430 10:44:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2677529' 00:04:40.430 killing process with pid 2677529 00:04:40.430 10:44:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # kill 2677529 00:04:40.430 10:44:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # wait 2677529 00:04:40.996 00:04:40.996 real 0m2.016s 00:04:40.996 user 0m2.142s 00:04:40.996 sys 0m0.645s 00:04:40.996 10:44:57 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1122 -- # xtrace_disable 00:04:40.996 10:44:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.996 ************************************ 00:04:40.996 END TEST locking_app_on_locked_coremask 00:04:40.996 ************************************ 00:04:40.996 10:44:57 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:40.996 10:44:57 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:40.996 10:44:57 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:40.996 10:44:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:40.996 ************************************ 00:04:40.996 START TEST locking_overlapped_coremask 00:04:40.996 ************************************ 00:04:40.996 10:44:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask 00:04:40.996 10:44:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2677798 00:04:40.996 10:44:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:40.996 10:44:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2677798 /var/tmp/spdk.sock 00:04:40.996 10:44:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 2677798 ']' 00:04:40.996 10:44:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.996 10:44:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:40.996 10:44:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.996 10:44:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:40.996 10:44:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:40.996 [2024-05-15 10:44:57.174076] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
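
locking_overlapped_coremask starts its first target on mask 0x7 and, a little further down, tries a second on 0x1c; the two masks intersect in exactly one core, which is why that second launch dies with 'Cannot create lock on core 2'. The overlap is plain bit arithmetic:

  # 0x7  = 0b00111 -> cores 0,1,2 (matching the three reactors started below)
  # 0x1c = 0b11100 -> cores 2,3,4
  printf '0x%x\n' $(( 0x7 & 0x1c ))   # 0x4, i.e. bit 2: only core 2 is contested
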
00:04:40.996 [2024-05-15 10:44:57.174167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2677798 ] 00:04:40.996 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.254 [2024-05-15 10:44:57.245449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:41.254 [2024-05-15 10:44:57.367171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.254 [2024-05-15 10:44:57.367249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:41.254 [2024-05-15 10:44:57.367253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.188 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:42.189 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 0 00:04:42.189 10:44:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2677936 00:04:42.189 10:44:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2677936 /var/tmp/spdk2.sock 00:04:42.189 10:44:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:42.189 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:04:42.189 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 2677936 /var/tmp/spdk2.sock 00:04:42.189 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:04:42.189 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:42.189 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:04:42.189 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:42.189 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 2677936 /var/tmp/spdk2.sock 00:04:42.189 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@827 -- # '[' -z 2677936 ']' 00:04:42.189 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:42.189 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:42.189 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:42.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:42.189 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:42.189 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:42.189 [2024-05-15 10:44:58.163669] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:04:42.189 [2024-05-15 10:44:58.163751] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2677936 ] 00:04:42.189 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.189 [2024-05-15 10:44:58.263636] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2677798 has claimed it. 00:04:42.189 [2024-05-15 10:44:58.263705] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:42.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 842: kill: (2677936) - No such process 00:04:42.755 ERROR: process (pid: 2677936) is no longer running 00:04:42.755 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:42.755 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # return 1 00:04:42.755 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:04:42.755 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:42.755 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:42.755 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:42.755 10:44:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:42.755 10:44:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:42.755 10:44:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:42.755 10:44:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:42.755 10:44:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2677798 00:04:42.755 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@946 -- # '[' -z 2677798 ']' 00:04:42.755 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # kill -0 2677798 00:04:42.755 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # uname 00:04:42.755 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:42.755 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2677798 00:04:42.755 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:42.755 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:42.755 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2677798' 00:04:42.755 killing process with pid 2677798 00:04:42.755 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # kill 
2677798 00:04:42.755 10:44:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # wait 2677798 00:04:43.322 00:04:43.322 real 0m2.235s 00:04:43.322 user 0m6.220s 00:04:43.322 sys 0m0.511s 00:04:43.322 10:44:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:43.322 10:44:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:43.322 ************************************ 00:04:43.322 END TEST locking_overlapped_coremask 00:04:43.322 ************************************ 00:04:43.322 10:44:59 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:43.322 10:44:59 event.cpu_locks -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:43.322 10:44:59 event.cpu_locks -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:43.322 10:44:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:43.322 ************************************ 00:04:43.322 START TEST locking_overlapped_coremask_via_rpc 00:04:43.322 ************************************ 00:04:43.322 10:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1121 -- # locking_overlapped_coremask_via_rpc 00:04:43.322 10:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2678108 00:04:43.323 10:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:43.323 10:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2678108 /var/tmp/spdk.sock 00:04:43.323 10:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2678108 ']' 00:04:43.323 10:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.323 10:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:43.323 10:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.323 10:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:43.323 10:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.323 [2024-05-15 10:44:59.463609] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:04:43.323 [2024-05-15 10:44:59.463702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2678108 ] 00:04:43.323 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.323 [2024-05-15 10:44:59.533030] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:43.323 [2024-05-15 10:44:59.533068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:43.581 [2024-05-15 10:44:59.645023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.581 [2024-05-15 10:44:59.647950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:43.581 [2024-05-15 10:44:59.647962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.839 10:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:43.839 10:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:43.839 10:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2678236 00:04:43.839 10:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2678236 /var/tmp/spdk2.sock 00:04:43.839 10:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2678236 ']' 00:04:43.839 10:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:43.839 10:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:43.839 10:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:43.839 10:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:43.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:43.839 10:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:43.839 10:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.839 [2024-05-15 10:44:59.938690] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:04:43.840 [2024-05-15 10:44:59.938788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2678236 ] 00:04:43.840 EAL: No free 2048 kB hugepages reported on node 1 00:04:43.840 [2024-05-15 10:45:00.041260] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:43.840 [2024-05-15 10:45:00.041317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:44.098 [2024-05-15 10:45:00.270061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:44.098 [2024-05-15 10:45:00.270123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:04:44.098 [2024-05-15 10:45:00.270125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:44.663 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:44.663 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:44.921 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:44.921 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.921 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.921 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:44.921 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:44.921 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:44.921 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:44.921 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:44.921 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.921 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:44.921 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:44.921 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:44.921 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:44.921 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:44.921 [2024-05-15 10:45:00.916035] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2678108 has claimed it. 
00:04:44.921 request: 00:04:44.921 { 00:04:44.921 "method": "framework_enable_cpumask_locks", 00:04:44.921 "req_id": 1 00:04:44.921 } 00:04:44.921 Got JSON-RPC error response 00:04:44.921 response: 00:04:44.921 { 00:04:44.921 "code": -32603, 00:04:44.921 "message": "Failed to claim CPU core: 2" 00:04:44.921 } 00:04:44.921 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:44.921 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:44.921 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:44.921 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:44.921 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:44.921 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2678108 /var/tmp/spdk.sock 00:04:44.921 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2678108 ']' 00:04:44.921 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.921 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:44.921 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.921 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:44.921 10:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.179 10:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:45.179 10:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:45.179 10:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2678236 /var/tmp/spdk2.sock 00:04:45.179 10:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@827 -- # '[' -z 2678236 ']' 00:04:45.179 10:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:45.179 10:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:45.179 10:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:45.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
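The es= trace lines here (and throughout this log) come from the harness's NOT wrapper, which asserts that a command really does fail: the -32603 JSON-RPC error above surfaces as a non-zero rpc_cmd exit status. A simplified sketch of that wrapper, not the verbatim autotest_common.sh source:

NOT() {                                  # assert that a command fails
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=$((es - 128))   # unwrap the shell's 128+N signal-exit encoding
    (( es != 0 ))                        # NOT succeeds only if the wrapped command failed
}
NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks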
00:04:45.179 10:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:45.179 10:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.437 10:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:45.437 10:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # return 0 00:04:45.437 10:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:45.437 10:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:45.437 10:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:45.437 10:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:45.437 00:04:45.437 real 0m2.030s 00:04:45.437 user 0m1.053s 00:04:45.437 sys 0m0.185s 00:04:45.437 10:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:45.437 10:45:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.437 ************************************ 00:04:45.437 END TEST locking_overlapped_coremask_via_rpc 00:04:45.437 ************************************ 00:04:45.437 10:45:01 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:45.437 10:45:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2678108 ]] 00:04:45.437 10:45:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2678108 00:04:45.437 10:45:01 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2678108 ']' 00:04:45.437 10:45:01 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2678108 00:04:45.437 10:45:01 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:04:45.437 10:45:01 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:45.437 10:45:01 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2678108 00:04:45.437 10:45:01 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:45.437 10:45:01 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:45.437 10:45:01 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2678108' 00:04:45.437 killing process with pid 2678108 00:04:45.437 10:45:01 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 2678108 00:04:45.437 10:45:01 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 2678108 00:04:45.695 10:45:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2678236 ]] 00:04:45.695 10:45:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2678236 00:04:45.695 10:45:01 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2678236 ']' 00:04:45.695 10:45:01 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2678236 00:04:45.695 10:45:01 event.cpu_locks -- common/autotest_common.sh@951 -- # uname 00:04:45.953 10:45:01 event.cpu_locks -- common/autotest_common.sh@951 -- # '[' 
Linux = Linux ']' 00:04:45.953 10:45:01 event.cpu_locks -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2678236 00:04:45.953 10:45:01 event.cpu_locks -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:04:45.953 10:45:01 event.cpu_locks -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:04:45.953 10:45:01 event.cpu_locks -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2678236' 00:04:45.953 killing process with pid 2678236 00:04:45.953 10:45:01 event.cpu_locks -- common/autotest_common.sh@965 -- # kill 2678236 00:04:45.953 10:45:01 event.cpu_locks -- common/autotest_common.sh@970 -- # wait 2678236 00:04:46.210 10:45:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:46.210 10:45:02 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:46.210 10:45:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2678108 ]] 00:04:46.210 10:45:02 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2678108 00:04:46.210 10:45:02 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2678108 ']' 00:04:46.210 10:45:02 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2678108 00:04:46.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2678108) - No such process 00:04:46.210 10:45:02 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 2678108 is not found' 00:04:46.210 Process with pid 2678108 is not found 00:04:46.210 10:45:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2678236 ]] 00:04:46.210 10:45:02 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2678236 00:04:46.210 10:45:02 event.cpu_locks -- common/autotest_common.sh@946 -- # '[' -z 2678236 ']' 00:04:46.210 10:45:02 event.cpu_locks -- common/autotest_common.sh@950 -- # kill -0 2678236 00:04:46.210 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2678236) - No such process 00:04:46.210 10:45:02 event.cpu_locks -- common/autotest_common.sh@973 -- # echo 'Process with pid 2678236 is not found' 00:04:46.210 Process with pid 2678236 is not found 00:04:46.210 10:45:02 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:46.210 00:04:46.210 real 0m17.944s 00:04:46.210 user 0m31.200s 00:04:46.210 sys 0m5.574s 00:04:46.210 10:45:02 event.cpu_locks -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:46.211 10:45:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:46.211 ************************************ 00:04:46.211 END TEST cpu_locks 00:04:46.211 ************************************ 00:04:46.211 00:04:46.211 real 0m44.026s 00:04:46.211 user 1m22.675s 00:04:46.211 sys 0m9.859s 00:04:46.211 10:45:02 event -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:46.211 10:45:02 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.211 ************************************ 00:04:46.211 END TEST event 00:04:46.211 ************************************ 00:04:46.499 10:45:02 -- spdk/autotest.sh@191 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:46.499 10:45:02 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:46.499 10:45:02 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:46.499 10:45:02 -- common/autotest_common.sh@10 -- # set +x 00:04:46.499 ************************************ 00:04:46.499 START TEST thread 00:04:46.499 ************************************ 00:04:46.499 10:45:02 thread -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:46.499 * Looking for test storage... 00:04:46.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:46.499 10:45:02 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:46.499 10:45:02 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:04:46.499 10:45:02 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:46.499 10:45:02 thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.499 ************************************ 00:04:46.499 START TEST thread_poller_perf 00:04:46.499 ************************************ 00:04:46.499 10:45:02 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:46.499 [2024-05-15 10:45:02.583580] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:04:46.499 [2024-05-15 10:45:02.583646] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2678720 ] 00:04:46.499 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.499 [2024-05-15 10:45:02.657497] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.760 [2024-05-15 10:45:02.774739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.760 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:47.713 ====================================== 00:04:47.713 busy:2715883007 (cyc) 00:04:47.713 total_run_count: 296000 00:04:47.713 tsc_hz: 2700000000 (cyc) 00:04:47.713 ====================================== 00:04:47.713 poller_cost: 9175 (cyc), 3398 (nsec) 00:04:47.713 00:04:47.713 real 0m1.337s 00:04:47.713 user 0m1.242s 00:04:47.713 sys 0m0.089s 00:04:47.713 10:45:03 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:47.713 10:45:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:47.713 ************************************ 00:04:47.713 END TEST thread_poller_perf 00:04:47.713 ************************************ 00:04:47.713 10:45:03 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:47.713 10:45:03 thread -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:04:47.713 10:45:03 thread -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:47.713 10:45:03 thread -- common/autotest_common.sh@10 -- # set +x 00:04:47.970 ************************************ 00:04:47.970 START TEST thread_poller_perf 00:04:47.970 ************************************ 00:04:47.970 10:45:03 thread.thread_poller_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:47.970 [2024-05-15 10:45:03.972191] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
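In the ====== summary block above, poller_cost is simply busy cycles divided by total_run_count, converted to wall time through tsc_hz; the busy-poll run that follows (0 microseconds period) lands near 705 cycles / 261 nsec by the same division. Reproducing the 1-microsecond-period numbers in shell arithmetic:

echo $(( 2715883007 / 296000 ))            # busy cyc / total_run_count = 9175 cyc per poll
echo $(( 9175 * 1000000000 / 2700000000 )) # cycles -> nsec at tsc_hz 2.7 GHz = 3398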
00:04:47.970 [2024-05-15 10:45:03.972271] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2678875 ] 00:04:47.970 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.970 [2024-05-15 10:45:04.049248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.970 [2024-05-15 10:45:04.167428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.970 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:49.338 ====================================== 00:04:49.338 busy:2702860398 (cyc) 00:04:49.338 total_run_count: 3832000 00:04:49.338 tsc_hz: 2700000000 (cyc) 00:04:49.338 ====================================== 00:04:49.338 poller_cost: 705 (cyc), 261 (nsec) 00:04:49.338 00:04:49.338 real 0m1.329s 00:04:49.338 user 0m1.231s 00:04:49.338 sys 0m0.091s 00:04:49.338 10:45:05 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:49.338 10:45:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:49.338 ************************************ 00:04:49.338 END TEST thread_poller_perf 00:04:49.338 ************************************ 00:04:49.338 10:45:05 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:49.338 00:04:49.338 real 0m2.827s 00:04:49.338 user 0m2.540s 00:04:49.338 sys 0m0.282s 00:04:49.338 10:45:05 thread -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:49.338 10:45:05 thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.338 ************************************ 00:04:49.338 END TEST thread 00:04:49.339 ************************************ 00:04:49.339 10:45:05 -- spdk/autotest.sh@192 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:04:49.339 10:45:05 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:04:49.339 10:45:05 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:49.339 10:45:05 -- common/autotest_common.sh@10 -- # set +x 00:04:49.339 ************************************ 00:04:49.339 START TEST accel 00:04:49.339 ************************************ 00:04:49.339 10:45:05 accel -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:04:49.339 * Looking for test storage... 
00:04:49.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:04:49.339 10:45:05 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:04:49.339 10:45:05 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:04:49.339 10:45:05 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:49.339 10:45:05 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=2679186 00:04:49.339 10:45:05 accel -- accel/accel.sh@63 -- # waitforlisten 2679186 00:04:49.339 10:45:05 accel -- common/autotest_common.sh@827 -- # '[' -z 2679186 ']' 00:04:49.339 10:45:05 accel -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.339 10:45:05 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:04:49.339 10:45:05 accel -- common/autotest_common.sh@832 -- # local max_retries=100 00:04:49.339 10:45:05 accel -- accel/accel.sh@61 -- # build_accel_config 00:04:49.339 10:45:05 accel -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.339 10:45:05 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:49.339 10:45:05 accel -- common/autotest_common.sh@836 -- # xtrace_disable 00:04:49.339 10:45:05 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:49.339 10:45:05 accel -- common/autotest_common.sh@10 -- # set +x 00:04:49.339 10:45:05 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:49.339 10:45:05 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:49.339 10:45:05 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:49.339 10:45:05 accel -- accel/accel.sh@40 -- # local IFS=, 00:04:49.339 10:45:05 accel -- accel/accel.sh@41 -- # jq -r . 00:04:49.339 [2024-05-15 10:45:05.458757] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:04:49.339 [2024-05-15 10:45:05.458833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2679186 ] 00:04:49.339 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.339 [2024-05-15 10:45:05.530985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.597 [2024-05-15 10:45:05.647095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.855 10:45:05 accel -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:04:49.855 10:45:05 accel -- common/autotest_common.sh@860 -- # return 0 00:04:49.855 10:45:05 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:04:49.855 10:45:05 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:04:49.855 10:45:05 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:04:49.855 10:45:05 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:04:49.855 10:45:05 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:04:49.855 10:45:05 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:04:49.855 10:45:05 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:49.855 10:45:05 accel -- common/autotest_common.sh@10 -- # set +x 00:04:49.855 10:45:05 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:04:49.855 10:45:05 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.855 10:45:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.855 10:45:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.855 10:45:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.855 10:45:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.855 10:45:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.855 10:45:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.855 10:45:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.855 10:45:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.855 10:45:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.855 10:45:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.855 10:45:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.855 10:45:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.855 10:45:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.855 10:45:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.855 10:45:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.855 10:45:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.855 10:45:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.855 10:45:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.855 10:45:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.855 10:45:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.855 10:45:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.855 10:45:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.855 10:45:05 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.855 10:45:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.855 10:45:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.855 10:45:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.855 10:45:05 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # IFS== 00:04:49.855 10:45:05 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:49.855 10:45:05 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:49.855 10:45:05 accel -- accel/accel.sh@75 -- # killprocess 2679186 00:04:49.855 10:45:05 accel -- common/autotest_common.sh@946 -- # '[' -z 2679186 ']' 00:04:49.855 10:45:05 accel -- common/autotest_common.sh@950 -- # kill -0 2679186 00:04:49.855 10:45:05 accel -- common/autotest_common.sh@951 -- # uname 00:04:49.855 10:45:05 accel -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:04:49.855 10:45:05 accel -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2679186 00:04:49.855 10:45:05 accel -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:04:49.855 10:45:05 accel -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:04:49.855 10:45:05 accel -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2679186' 00:04:49.855 killing process with pid 2679186 00:04:49.855 10:45:05 accel -- common/autotest_common.sh@965 -- # kill 2679186 00:04:49.855 10:45:05 accel -- common/autotest_common.sh@970 -- # wait 2679186 00:04:50.421 10:45:06 accel -- accel/accel.sh@76 -- # trap - ERR 00:04:50.421 10:45:06 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:04:50.421 10:45:06 accel -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:04:50.421 10:45:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:50.421 10:45:06 accel -- common/autotest_common.sh@10 -- # set +x 00:04:50.421 10:45:06 accel.accel_help -- common/autotest_common.sh@1121 -- # accel_perf -h 00:04:50.421 10:45:06 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:04:50.421 10:45:06 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:04:50.421 10:45:06 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:50.421 10:45:06 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:50.421 10:45:06 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:50.421 10:45:06 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:50.421 10:45:06 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:50.421 10:45:06 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:04:50.421 10:45:06 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:04:50.421 10:45:06 accel.accel_help -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:50.421 10:45:06 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:04:50.421 10:45:06 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:04:50.421 10:45:06 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:04:50.421 10:45:06 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:50.421 10:45:06 accel -- common/autotest_common.sh@10 -- # set +x 00:04:50.421 ************************************ 00:04:50.421 START TEST accel_missing_filename 00:04:50.421 ************************************ 00:04:50.421 10:45:06 accel.accel_missing_filename -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress 00:04:50.421 10:45:06 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:04:50.421 10:45:06 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:04:50.421 10:45:06 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:50.421 10:45:06 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.421 10:45:06 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:50.421 10:45:06 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.421 10:45:06 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:04:50.421 10:45:06 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:04:50.421 10:45:06 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:04:50.421 10:45:06 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:50.421 10:45:06 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:50.421 10:45:06 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:50.421 10:45:06 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:50.421 10:45:06 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:50.421 10:45:06 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:04:50.421 10:45:06 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:04:50.421 [2024-05-15 10:45:06.565133] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:04:50.421 [2024-05-15 10:45:06.565191] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2679360 ] 00:04:50.421 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.421 [2024-05-15 10:45:06.639471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.679 [2024-05-15 10:45:06.757334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.679 [2024-05-15 10:45:06.819491] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:50.679 [2024-05-15 10:45:06.904891] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:04:50.937 A filename is required. 
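"A filename is required." is accel_perf itself rejecting a compress workload started without -l, and the NOT wrapper turns that expected failure into a pass for accel_missing_filename. The failing and fixed invocations side by side (bib is the canned input file the compress tests feed in; paths shortened):

build/examples/accel_perf -t 1 -w compress                    # fails: compress needs -l <input file>
build/examples/accel_perf -t 1 -w compress -l test/accel/bib  # accepted; adding -y instead trips the verify abort below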
00:04:50.937 10:45:07 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:04:50.937 10:45:07 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:50.937 10:45:07 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:04:50.937 10:45:07 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:04:50.937 10:45:07 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:04:50.937 10:45:07 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:50.937 00:04:50.937 real 0m0.482s 00:04:50.937 user 0m0.362s 00:04:50.937 sys 0m0.153s 00:04:50.937 10:45:07 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:50.937 10:45:07 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:04:50.937 ************************************ 00:04:50.937 END TEST accel_missing_filename 00:04:50.937 ************************************ 00:04:50.937 10:45:07 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:50.937 10:45:07 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:04:50.937 10:45:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:50.937 10:45:07 accel -- common/autotest_common.sh@10 -- # set +x 00:04:50.937 ************************************ 00:04:50.937 START TEST accel_compress_verify 00:04:50.937 ************************************ 00:04:50.937 10:45:07 accel.accel_compress_verify -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:50.937 10:45:07 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:04:50.937 10:45:07 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:50.937 10:45:07 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:50.937 10:45:07 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.937 10:45:07 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:50.937 10:45:07 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:50.937 10:45:07 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:50.937 10:45:07 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:04:50.937 10:45:07 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:04:50.938 10:45:07 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:50.938 10:45:07 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:50.938 10:45:07 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:50.938 10:45:07 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:50.938 10:45:07 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:50.938 
10:45:07 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:04:50.938 10:45:07 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:04:50.938 [2024-05-15 10:45:07.094029] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:04:50.938 [2024-05-15 10:45:07.094087] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2679392 ] 00:04:50.938 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.938 [2024-05-15 10:45:07.167312] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.195 [2024-05-15 10:45:07.285454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.195 [2024-05-15 10:45:07.344902] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:51.454 [2024-05-15 10:45:07.428672] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:04:51.454 00:04:51.454 Compression does not support the verify option, aborting. 00:04:51.454 10:45:07 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:04:51.454 10:45:07 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:51.454 10:45:07 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:04:51.454 10:45:07 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:04:51.454 10:45:07 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:04:51.454 10:45:07 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:51.454 00:04:51.454 real 0m0.472s 00:04:51.454 user 0m0.354s 00:04:51.454 sys 0m0.149s 00:04:51.454 10:45:07 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:51.454 10:45:07 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:04:51.454 ************************************ 00:04:51.454 END TEST accel_compress_verify 00:04:51.454 ************************************ 00:04:51.454 10:45:07 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:04:51.454 10:45:07 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:04:51.454 10:45:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:51.454 10:45:07 accel -- common/autotest_common.sh@10 -- # set +x 00:04:51.454 ************************************ 00:04:51.454 START TEST accel_wrong_workload 00:04:51.454 ************************************ 00:04:51.454 10:45:07 accel.accel_wrong_workload -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w foobar 00:04:51.454 10:45:07 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:04:51.454 10:45:07 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:04:51.454 10:45:07 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:51.454 10:45:07 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.454 10:45:07 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:51.454 10:45:07 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.454 10:45:07 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 
00:04:51.454 10:45:07 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:04:51.454 10:45:07 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:04:51.454 10:45:07 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:51.454 10:45:07 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:51.454 10:45:07 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.454 10:45:07 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:51.454 10:45:07 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:51.454 10:45:07 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:04:51.454 10:45:07 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:04:51.454 Unsupported workload type: foobar 00:04:51.454 [2024-05-15 10:45:07.615525] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:04:51.454 accel_perf options: 00:04:51.454 [-h help message] 00:04:51.454 [-q queue depth per core] 00:04:51.454 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:51.454 [-T number of threads per core 00:04:51.454 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:51.454 [-t time in seconds] 00:04:51.454 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:51.454 [ dif_verify, , dif_generate, dif_generate_copy 00:04:51.454 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:51.454 [-l for compress/decompress workloads, name of uncompressed input file 00:04:51.454 [-S for crc32c workload, use this seed value (default 0) 00:04:51.454 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:51.454 [-f for fill workload, use this BYTE value (default 255) 00:04:51.454 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:51.454 [-y verify result if this switch is on] 00:04:51.454 [-a tasks to allocate per core (default: same value as -q)] 00:04:51.454 Can be used to spread operations across a wider range of memory. 
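The usage dump above is accel_perf reacting to the unknown -w workload foobar. For contrast, invocations it does accept, mirroring the positive tests further down (flags taken from those runs):

build/examples/accel_perf -t 1 -w crc32c -S 32 -y   # crc32c, seed 32, verify on (accel_crc32c below)
build/examples/accel_perf -t 1 -w crc32c -y -C 2    # vectored crc32c over 2 buffers (accel_crc32c_C2)
build/examples/accel_perf -t 1 -w xor -y -x 2       # xor needs >= 2 sources, hence -x -1 next fails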
00:04:51.454 10:45:07 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:04:51.454 10:45:07 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:51.454 10:45:07 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:51.454 10:45:07 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:51.454 00:04:51.454 real 0m0.022s 00:04:51.454 user 0m0.015s 00:04:51.454 sys 0m0.007s 00:04:51.454 10:45:07 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:51.454 10:45:07 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:04:51.454 ************************************ 00:04:51.454 END TEST accel_wrong_workload 00:04:51.454 ************************************ 00:04:51.454 Error: writing output failed: Broken pipe 00:04:51.454 10:45:07 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:04:51.454 10:45:07 accel -- common/autotest_common.sh@1097 -- # '[' 10 -le 1 ']' 00:04:51.454 10:45:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:51.455 10:45:07 accel -- common/autotest_common.sh@10 -- # set +x 00:04:51.455 ************************************ 00:04:51.455 START TEST accel_negative_buffers 00:04:51.455 ************************************ 00:04:51.455 10:45:07 accel.accel_negative_buffers -- common/autotest_common.sh@1121 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:04:51.455 10:45:07 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:04:51.455 10:45:07 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:04:51.455 10:45:07 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:51.455 10:45:07 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.455 10:45:07 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:51.455 10:45:07 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:51.455 10:45:07 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:04:51.455 10:45:07 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:04:51.455 10:45:07 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:04:51.455 10:45:07 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:51.455 10:45:07 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:51.455 10:45:07 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.455 10:45:07 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:51.455 10:45:07 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:51.455 10:45:07 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:04:51.455 10:45:07 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:04:51.455 -x option must be non-negative. 
00:04:51.455 [2024-05-15 10:45:07.683078] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:04:51.714 accel_perf options: 00:04:51.714 [-h help message] 00:04:51.714 [-q queue depth per core] 00:04:51.714 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:51.714 [-T number of threads per core 00:04:51.714 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:51.714 [-t time in seconds] 00:04:51.714 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:51.714 [ dif_verify, , dif_generate, dif_generate_copy 00:04:51.714 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:51.714 [-l for compress/decompress workloads, name of uncompressed input file 00:04:51.714 [-S for crc32c workload, use this seed value (default 0) 00:04:51.714 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:51.714 [-f for fill workload, use this BYTE value (default 255) 00:04:51.714 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:51.714 [-y verify result if this switch is on] 00:04:51.714 [-a tasks to allocate per core (default: same value as -q)] 00:04:51.714 Can be used to spread operations across a wider range of memory. 00:04:51.714 10:45:07 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:04:51.714 10:45:07 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:51.714 10:45:07 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:51.714 10:45:07 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:51.714 00:04:51.714 real 0m0.021s 00:04:51.714 user 0m0.014s 00:04:51.714 sys 0m0.007s 00:04:51.714 10:45:07 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:51.714 10:45:07 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:04:51.714 ************************************ 00:04:51.714 END TEST accel_negative_buffers 00:04:51.714 ************************************ 00:04:51.715 10:45:07 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:04:51.715 10:45:07 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:04:51.715 10:45:07 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:51.715 10:45:07 accel -- common/autotest_common.sh@10 -- # set +x 00:04:51.715 Error: writing output failed: Broken pipe 00:04:51.715 ************************************ 00:04:51.715 START TEST accel_crc32c 00:04:51.715 ************************************ 00:04:51.715 10:45:07 accel.accel_crc32c -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -S 32 -y 00:04:51.715 10:45:07 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:04:51.715 10:45:07 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:04:51.715 10:45:07 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.715 10:45:07 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:51.715 10:45:07 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.715 10:45:07 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 
-y 00:04:51.715 10:45:07 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:04:51.715 10:45:07 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:51.715 10:45:07 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:51.715 10:45:07 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.715 10:45:07 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:51.715 10:45:07 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:51.715 10:45:07 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:04:51.715 10:45:07 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:04:51.715 [2024-05-15 10:45:07.749142] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:04:51.715 [2024-05-15 10:45:07.749197] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2679821 ] 00:04:51.715 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.715 [2024-05-15 10:45:07.827954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.715 [2024-05-15 10:45:07.942679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.973 10:45:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.973 10:45:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.973 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.973 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.973 10:45:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.973 10:45:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.973 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.973 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.974 10:45:08 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:51.974 10:45:08 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:53.349 10:45:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:53.349 10:45:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:53.349 10:45:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:53.349 10:45:09 accel.accel_crc32c -- 
accel/accel.sh@19 -- # read -r var val 00:04:53.349 10:45:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:53.349 10:45:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:53.349 10:45:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:53.349 10:45:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:53.349 10:45:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:53.349 10:45:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:53.349 10:45:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:53.349 10:45:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:53.349 10:45:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:53.349 10:45:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:53.349 10:45:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:53.349 10:45:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:53.349 10:45:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:53.349 10:45:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:53.349 10:45:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:53.349 10:45:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:53.349 10:45:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:53.349 10:45:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:53.349 10:45:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:53.349 10:45:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:53.349 10:45:09 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:53.349 10:45:09 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:53.349 10:45:09 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:53.349 00:04:53.349 real 0m1.470s 00:04:53.349 user 0m1.317s 00:04:53.349 sys 0m0.154s 00:04:53.349 10:45:09 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # xtrace_disable 00:04:53.349 10:45:09 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:04:53.349 ************************************ 00:04:53.349 END TEST accel_crc32c 00:04:53.349 ************************************ 00:04:53.349 10:45:09 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:04:53.349 10:45:09 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:04:53.349 10:45:09 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:04:53.349 10:45:09 accel -- common/autotest_common.sh@10 -- # set +x 00:04:53.349 ************************************ 00:04:53.349 START TEST accel_crc32c_C2 00:04:53.349 ************************************ 00:04:53.349 10:45:09 accel.accel_crc32c_C2 -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w crc32c -y -C 2 00:04:53.349 10:45:09 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:04:53.349 10:45:09 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:04:53.349 10:45:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:53.349 10:45:09 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:53.349 10:45:09 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:04:53.349 10:45:09 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:04:53.349 10:45:09 accel.accel_crc32c_C2 -- 
00:04:53.349 10:45:09 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2
00:04:53.349 10:45:09 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']'
00:04:53.349 ************************************
00:04:53.349 START TEST accel_crc32c_C2
00:04:53.349 ************************************
00:04:53.349 10:45:09 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2
00:04:53.349 10:45:09 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:04:53.349 10:45:09 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:04:53.349 10:45:09 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:04:53.349 [2024-05-15 10:45:09.265689] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization...
00:04:53.349 [2024-05-15 10:45:09.265754] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2680236 ]
00:04:53.349 EAL: No free 2048 kB hugepages reported on node 1
00:04:53.349 [2024-05-15 10:45:09.336445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:53.349 [2024-05-15 10:45:09.453366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:04:53.349 10:45:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:04:53.349 10:45:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c
00:04:53.349 10:45:09 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c
00:04:53.349 10:45:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:04:53.349 10:45:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:04:53.349 10:45:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:04:53.349 10:45:09 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:04:53.349 10:45:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:04:53.349 10:45:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:04:53.349 10:45:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:04:53.349 10:45:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:04:53.349 10:45:09 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:04:54.723 10:45:10 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:04:54.723 10:45:10 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:04:54.723 10:45:10 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:04:54.723 real 0m1.482s
00:04:54.723 user 0m1.332s
00:04:54.723 sys 0m0.152s
00:04:54.723 ************************************
00:04:54.723 END TEST accel_crc32c_C2
00:04:54.723 ************************************
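Each accel_perf invocation above receives its JSON accel configuration as `-c /dev/fd/62`: judging by the `build_accel_config` and `jq -r .` trace lines, the config is generated in the shell and handed to the binary through a process substitution rather than a temp file, so the path is simply whatever fd bash picked. A sketch of the general technique, with a placeholder config body (not the real build_accel_config output):

  #!/usr/bin/env bash
  # <(...) expands to /dev/fd/N, which is why the trace shows -c /dev/fd/62;
  # the exact fd number is an accident of the shell's bookkeeping.
  build_config() {
      printf '{"subsystems": []}\n'   # stand-in for the real accel JSON config
  }

  # jq serves here as a stand-in consumer for: accel_perf -c <config-file>
  jq -r . <(build_config)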
00:04:54.723 10:45:10 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:04:54.723 10:45:10 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:04:54.723 ************************************
00:04:54.723 START TEST accel_copy
00:04:54.723 ************************************
00:04:54.723 10:45:10 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:04:54.723 10:45:10 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:04:54.723 10:45:10 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:04:54.723 10:45:10 accel.accel_copy -- accel/accel.sh@41 -- # jq -r .
00:04:54.723 [2024-05-15 10:45:10.795904] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization...
00:04:54.723 [2024-05-15 10:45:10.795974] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2680399 ]
00:04:54.723 EAL: No free 2048 kB hugepages reported on node 1
00:04:54.723 [2024-05-15 10:45:10.869526] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:54.723 [2024-05-15 10:45:10.987875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
10:45:11 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1
10:45:11 accel.accel_copy -- accel/accel.sh@20 -- # val=copy
10:45:11 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy
10:45:11 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes'
10:45:11 accel.accel_copy -- accel/accel.sh@20 -- # val=software
10:45:11 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software
10:45:11 accel.accel_copy -- accel/accel.sh@20 -- # val=32
10:45:11 accel.accel_copy -- accel/accel.sh@20 -- # val=32
10:45:11 accel.accel_copy -- accel/accel.sh@20 -- # val=1
10:45:11 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds'
10:45:11 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes
00:04:56.355 10:45:12 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:04:56.355 10:45:12 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:04:56.355 10:45:12 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:04:56.355 real 0m1.482s
00:04:56.355 user 0m1.333s
00:04:56.355 sys 0m0.151s
00:04:56.355 ************************************
00:04:56.355 END TEST accel_copy
00:04:56.355 ************************************
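The `run_test <name> <command...>` calls from autotest_common.sh bracket every test above: judging from the trace they check the argument count (the `'[' 7 -le 1 ']'` lines), print the START/END banners, and time the body, which is where the `real`/`user`/`sys` triple after each section comes from. A hedged reconstruction of that wrapper's shape, not the verbatim autotest_common.sh source:

  #!/usr/bin/env bash
  # run_test-style wrapper: arg-count guard, banners, and bash's `time`
  # keyword producing the real/user/sys lines seen after every test.
  run_test() {
      [ "$#" -le 1 ] && return 1        # traces as: '[' 7 -le 1 ']'
      local test_name=$1
      shift
      echo "************************************"
      echo "START TEST $test_name"
      echo "************************************"
      time "$@"                         # real/user/sys go to stderr
      echo "************************************"
      echo "END TEST $test_name"
      echo "************************************"
  }

  run_test demo_sleep sleep 1           # prints banners plus a ~1s timing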
00:04:56.355 10:45:12 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:04:56.355 10:45:12 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:04:56.355 ************************************
00:04:56.355 START TEST accel_fill
00:04:56.355 ************************************
00:04:56.355 10:45:12 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:04:56.355 10:45:12 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:04:56.355 10:45:12 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:04:56.355 10:45:12 accel.accel_fill -- accel/accel.sh@41 -- # jq -r .
00:04:56.355 [2024-05-15 10:45:12.329497] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization...
00:04:56.355 [2024-05-15 10:45:12.329559] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2680665 ]
00:04:56.355 EAL: No free 2048 kB hugepages reported on node 1
00:04:56.355 [2024-05-15 10:45:12.402803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:56.355 [2024-05-15 10:45:12.519238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
10:45:12 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1
10:45:12 accel.accel_fill -- accel/accel.sh@20 -- # val=fill
10:45:12 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill
10:45:12 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80
10:45:12 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes'
10:45:12 accel.accel_fill -- accel/accel.sh@20 -- # val=software
10:45:12 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software
10:45:12 accel.accel_fill -- accel/accel.sh@20 -- # val=64
10:45:12 accel.accel_fill -- accel/accel.sh@20 -- # val=64
10:45:12 accel.accel_fill -- accel/accel.sh@20 -- # val=1
10:45:12 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds'
10:45:12 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes
00:04:57.983 10:45:13 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:04:57.983 10:45:13 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:04:57.983 10:45:13 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:04:57.983 real 0m1.479s
00:04:57.983 user 0m1.331s
00:04:57.983 sys 0m0.150s
00:04:57.983 ************************************
00:04:57.983 END TEST accel_fill
00:04:57.983 ************************************
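The fill run is the only one with extra knobs: `-f 128 -q 64 -a 64` on the command line surfaces in the config trace as `val=0x80` plus two `val=64` entries, so the fill byte is echoed back in hex and the 64s look like the queue and allocation values (an inference from the trace, not from accel_perf's option parsing). The hex round trip:

  # 128 decimal is the 0x80 recorded in the val=0x80 trace line
  printf '0x%x\n' 128    # -> 0x80
  echo $((0x80))         # -> 128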
00:04:57.984 10:45:13 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:04:57.984 10:45:13 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']'
00:04:57.984 ************************************
00:04:57.984 START TEST accel_copy_crc32c
00:04:57.984 ************************************
00:04:57.984 10:45:13 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:04:57.984 10:45:13 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:04:57.984 10:45:13 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:04:57.984 10:45:13 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r .
00:04:57.984 [2024-05-15 10:45:13.856894] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization...
00:04:57.984 [2024-05-15 10:45:13.856966] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2680834 ]
00:04:57.984 EAL: No free 2048 kB hugepages reported on node 1
00:04:57.984 [2024-05-15 10:45:13.934846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:57.984 [2024-05-15 10:45:14.053664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
10:45:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1
10:45:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c
10:45:14 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
10:45:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0
10:45:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
10:45:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
10:45:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software
10:45:14 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software
10:45:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
10:45:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
10:45:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1
10:45:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
10:45:14 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes
00:04:59.364 10:45:15 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:04:59.364 10:45:15 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:04:59.364 10:45:15 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:04:59.364 real 0m1.493s
00:04:59.364 user 0m1.345s
00:04:59.364 sys 0m0.151s
00:04:59.364 ************************************
00:04:59.364 END TEST accel_copy_crc32c
00:04:59.364 ************************************
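Traces like these are easier to audit once reduced to their assignments. A one-liner that pulls just the meaningful events out of a saved copy of this console output (build.log is a placeholder path):

  # Survey the assignment events in a captured xtrace log; quoted values
  # such as '4096 bytes' get cut at the space, which is fine for a survey.
  grep -oE "(val|accel_opc|accel_module)=[^ ]*" build.log | sort | uniq -c | sort -rn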
00:04:59.365 10:45:15 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:04:59.365 10:45:15 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']'
00:04:59.365 ************************************
00:04:59.365 START TEST accel_copy_crc32c_C2
00:04:59.365 ************************************
00:04:59.365 10:45:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:04:59.365 10:45:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:04:59.365 10:45:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:04:59.365 10:45:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:04:59.365 [2024-05-15 10:45:15.400318] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization...
00:04:59.365 [2024-05-15 10:45:15.400387] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2680985 ]
00:04:59.365 EAL: No free 2048 kB hugepages reported on node 1
00:04:59.365 [2024-05-15 10:45:15.475992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:59.365 [2024-05-15 10:45:15.594545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
10:45:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
10:45:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c
10:45:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
10:45:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0
10:45:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
10:45:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes'
10:45:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software
10:45:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
10:45:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
10:45:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
10:45:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1
10:45:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
10:45:15 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:05:01.001 10:45:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:01.001 10:45:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:05:01.001 10:45:16 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:01.001 real 0m1.486s
00:05:01.001 user 0m1.336s
00:05:01.001 sys 0m0.152s
00:05:01.002 ************************************
00:05:01.002 END TEST accel_copy_crc32c_C2
00:05:01.002 ************************************
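Comparing the two copy_crc32c runs: without `-C 2` the config trace carries two `'4096 bytes'` buffers, while the `-C 2` variant above shows `'4096 bytes'` followed by `'8192 bytes'`, consistent with the chained count scaling one side of the transfer to 2 x 4096. That reading is inferred from the trace, not from accel_perf's source. The arithmetic:

  # Expected size if the -C chain count multiplies the 4 KiB block
  block=4096 chain=2
  echo $(( block * chain ))   # -> 8192, matching val='8192 bytes' above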
00:05:01.002 [2024-05-15 10:45:16.937689] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2681261 ] 00:05:01.002 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.002 [2024-05-15 10:45:17.016732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.002 [2024-05-15 10:45:17.130999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.002 
10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:01.002 10:45:17 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.416 10:45:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:02.416 10:45:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.416 10:45:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.416 10:45:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.416 10:45:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:02.416 10:45:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.416 10:45:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.416 10:45:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.416 10:45:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:02.416 10:45:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.416 10:45:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.416 10:45:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.416 10:45:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:02.416 10:45:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.416 10:45:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.416 10:45:18 accel.accel_dualcast -- 
accel/accel.sh@19 -- # read -r var val 00:05:02.416 10:45:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:02.416 10:45:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.416 10:45:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.416 10:45:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.416 10:45:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:02.416 10:45:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:02.416 10:45:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:02.416 10:45:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:02.416 10:45:18 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:02.416 10:45:18 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:02.416 10:45:18 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:02.416 00:05:02.416 real 0m1.481s 00:05:02.416 user 0m1.326s 00:05:02.416 sys 0m0.157s 00:05:02.416 10:45:18 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:02.416 10:45:18 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:02.416 ************************************ 00:05:02.416 END TEST accel_dualcast 00:05:02.416 ************************************ 00:05:02.416 10:45:18 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:02.416 10:45:18 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:02.416 10:45:18 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:02.416 10:45:18 accel -- common/autotest_common.sh@10 -- # set +x 00:05:02.416 ************************************ 00:05:02.416 START TEST accel_compare 00:05:02.416 ************************************ 00:05:02.416 10:45:18 accel.accel_compare -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compare -y 00:05:02.416 10:45:18 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:02.416 10:45:18 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:02.416 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.416 10:45:18 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:02.416 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.416 10:45:18 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:02.417 10:45:18 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:02.417 10:45:18 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:02.417 10:45:18 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:02.417 10:45:18 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:02.417 10:45:18 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:02.417 10:45:18 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:02.417 10:45:18 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:02.417 10:45:18 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:02.417 [2024-05-15 10:45:18.471116] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
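The accel_dualcast case above completed on the software engine (real 0m1.481s), and the harness has just launched accel_perf with -w compare. For readers following the workload names: dualcast writes a single source buffer to two destinations in one operation. Below is a minimal C sketch of that semantics only, assuming plain memcpy behaviour; the name sw_dualcast is illustrative and is not SPDK's API:

    #include <stdio.h>
    #include <string.h>

    /* Dualcast: copy one source into two destination buffers.
     * Semantics sketch only; SPDK's software accel module queues
     * real tasks rather than calling memcpy inline. */
    static void sw_dualcast(void *dst1, void *dst2, const void *src, size_t len)
    {
        memcpy(dst1, src, len);
        memcpy(dst2, src, len);
    }

    int main(void)
    {
        char src[4096], a[4096], b[4096];   /* 4096-byte payload, as in the trace */

        memset(src, 0x5a, sizeof(src));
        sw_dualcast(a, b, src, sizeof(src));
        printf("copies match: %d\n",
               memcmp(a, src, sizeof(a)) == 0 && memcmp(b, src, sizeof(b)) == 0);
        return 0;
    }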
00:05:02.417 [2024-05-15 10:45:18.471186] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2681422 ] 00:05:02.417 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.417 [2024-05-15 10:45:18.548269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.699 [2024-05-15 10:45:18.668022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.699 10:45:18 accel.accel_compare -- 
accel/accel.sh@19 -- # read -r var val 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.699 10:45:18 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:02.700 10:45:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.700 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.700 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.700 10:45:18 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:02.700 10:45:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.700 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.700 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.700 10:45:18 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:02.700 10:45:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.700 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.700 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.700 10:45:18 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:02.700 10:45:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.700 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.700 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.700 10:45:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:02.700 10:45:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.700 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.700 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:02.700 10:45:18 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:02.700 10:45:18 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:02.700 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:02.700 10:45:18 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:04.072 10:45:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:04.072 10:45:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:04.072 10:45:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:04.072 10:45:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:04.072 10:45:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:04.072 10:45:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:04.072 10:45:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:04.072 10:45:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:04.072 10:45:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:04.073 10:45:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:04.073 10:45:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:04.073 10:45:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:04.073 10:45:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:04.073 10:45:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:04.073 10:45:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:04.073 10:45:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:04.073 10:45:19 accel.accel_compare 
-- accel/accel.sh@20 -- # val= 00:05:04.073 10:45:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:04.073 10:45:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:04.073 10:45:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:04.073 10:45:19 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:04.073 10:45:19 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:04.073 10:45:19 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:04.073 10:45:19 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:04.073 10:45:19 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:04.073 10:45:19 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:04.073 10:45:19 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:04.073 00:05:04.073 real 0m1.492s 00:05:04.073 user 0m1.337s 00:05:04.073 sys 0m0.156s 00:05:04.073 10:45:19 accel.accel_compare -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:04.073 10:45:19 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:04.073 ************************************ 00:05:04.073 END TEST accel_compare 00:05:04.073 ************************************ 00:05:04.073 10:45:19 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:04.073 10:45:19 accel -- common/autotest_common.sh@1097 -- # '[' 7 -le 1 ']' 00:05:04.073 10:45:19 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:04.073 10:45:19 accel -- common/autotest_common.sh@10 -- # set +x 00:05:04.073 ************************************ 00:05:04.073 START TEST accel_xor 00:05:04.073 ************************************ 00:05:04.073 10:45:19 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y 00:05:04.073 10:45:19 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:04.073 10:45:19 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:04.073 10:45:19 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.073 10:45:19 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:04.073 10:45:19 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.073 10:45:19 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:04.073 10:45:19 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:04.073 10:45:19 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:04.073 10:45:19 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:04.073 10:45:19 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:04.073 10:45:19 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:04.073 10:45:19 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:04.073 10:45:19 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:04.073 10:45:19 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:04.073 [2024-05-15 10:45:20.013892] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
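The compare case landed in roughly the same envelope (real 0m1.492s). Its workload is the simplest of the set: check that two equally sized buffers hold identical bytes and report where they first diverge. A rough C equivalent, with sw_compare as an assumed name:

    #include <stdio.h>
    #include <string.h>

    /* Compare: return 0 when the buffers match, else the 1-based
     * offset of the first differing byte. Illustrative only. */
    static size_t sw_compare(const unsigned char *s1, const unsigned char *s2,
                             size_t len)
    {
        for (size_t i = 0; i < len; i++)
            if (s1[i] != s2[i])
                return i + 1;
        return 0;
    }

    int main(void)
    {
        unsigned char a[4096], b[4096];

        memset(a, 0x11, sizeof(a));
        memcpy(b, a, sizeof(b));
        b[100] ^= 0xff;                 /* inject a single miscompare */
        printf("first mismatch at byte %zu\n", sw_compare(a, b, sizeof(a)));
        return 0;
    }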
00:05:04.073 [2024-05-15 10:45:20.014000] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2681615 ] 00:05:04.073 EAL: No free 2048 kB hugepages reported on node 1 00:05:04.073 [2024-05-15 10:45:20.092036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.073 [2024-05-15 10:45:20.210409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:04.073 10:45:20 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.447 
10:45:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:05.447 00:05:05.447 real 0m1.500s 00:05:05.447 user 0m1.357s 00:05:05.447 sys 0m0.146s 00:05:05.447 10:45:21 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:05.447 10:45:21 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:05.447 ************************************ 00:05:05.447 END TEST accel_xor 00:05:05.447 ************************************ 00:05:05.447 10:45:21 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:05.447 10:45:21 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:05.447 10:45:21 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:05.447 10:45:21 accel -- common/autotest_common.sh@10 -- # set +x 00:05:05.447 ************************************ 00:05:05.447 START TEST accel_xor 00:05:05.447 ************************************ 00:05:05.447 10:45:21 accel.accel_xor -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w xor -y -x 3 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:05.447 10:45:21 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:05.447 [2024-05-15 10:45:21.573057] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
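The first xor run above used the default of two source buffers (val=2 in its trace); the run starting here passes -x 3 to raise the source count to three. The operation XORs all sources byte-wise into one destination, which is the core of software parity generation. A short sketch under those assumptions, with sw_xor as an invented name:

    #include <stdio.h>
    #include <string.h>

    /* XOR nsrcs source buffers of len bytes into dst. */
    static void sw_xor(unsigned char *dst, const unsigned char **srcs,
                       int nsrcs, size_t len)
    {
        memset(dst, 0, len);
        for (int s = 0; s < nsrcs; s++)
            for (size_t i = 0; i < len; i++)
                dst[i] ^= srcs[s][i];
    }

    int main(void)
    {
        unsigned char a[4096], b[4096], c[4096], parity[4096];
        const unsigned char *srcs[] = { a, b, c };

        memset(a, 0xaa, sizeof(a));
        memset(b, 0x55, sizeof(b));
        memset(c, 0x0f, sizeof(c));
        sw_xor(parity, srcs, 3, sizeof(parity));   /* the -w xor -y -x 3 case */
        printf("parity[0] = 0x%02x\n", parity[0]); /* 0xaa ^ 0x55 ^ 0x0f = 0xf0 */
        return 0;
    }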
00:05:05.447 [2024-05-15 10:45:21.573121] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2681850 ] 00:05:05.447 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.447 [2024-05-15 10:45:21.648770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.704 [2024-05-15 10:45:21.769603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@22 -- # 
accel_module=software 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:05.704 10:45:21 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:07.076 10:45:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:07.076 10:45:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:07.076 10:45:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:07.076 10:45:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:07.076 10:45:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:07.076 10:45:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:07.076 10:45:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:07.076 10:45:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:07.076 10:45:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:07.076 10:45:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:07.076 10:45:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:07.076 10:45:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:07.076 10:45:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:07.076 10:45:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:07.076 10:45:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:07.076 10:45:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:07.076 10:45:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:07.076 
10:45:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:07.076 10:45:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:07.076 10:45:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:07.076 10:45:23 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:07.076 10:45:23 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:07.076 10:45:23 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:07.076 10:45:23 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:07.076 10:45:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:07.076 10:45:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:07.076 10:45:23 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:07.076 00:05:07.076 real 0m1.503s 00:05:07.076 user 0m1.347s 00:05:07.076 sys 0m0.157s 00:05:07.076 10:45:23 accel.accel_xor -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:07.076 10:45:23 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:07.076 ************************************ 00:05:07.076 END TEST accel_xor 00:05:07.076 ************************************ 00:05:07.076 10:45:23 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:07.077 10:45:23 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:07.077 10:45:23 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:07.077 10:45:23 accel -- common/autotest_common.sh@10 -- # set +x 00:05:07.077 ************************************ 00:05:07.077 START TEST accel_dif_verify 00:05:07.077 ************************************ 00:05:07.077 10:45:23 accel.accel_dif_verify -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_verify 00:05:07.077 10:45:23 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:07.077 10:45:23 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:07.077 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.077 10:45:23 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:07.077 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.077 10:45:23 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:07.077 10:45:23 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:07.077 10:45:23 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:07.077 10:45:23 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:07.077 10:45:23 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.077 10:45:23 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.077 10:45:23 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:07.077 10:45:23 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:07.077 10:45:23 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:07.077 [2024-05-15 10:45:23.126190] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
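The dif_verify case starting here moves from raw buffer operations to T10 DIF protection-information checking. Its trace below sets 4096-byte buffers, a 512-byte value and an 8-byte value, which reads as eight 512-byte blocks each carrying an 8-byte protection tuple (CRC guard, application tag, reference tag). A sketch of that layout and of the guard check, assuming the standard T10-DIF CRC-16 with polynomial 0x8BB7; the struct and function names are illustrative, not SPDK's:

    #include <stdint.h>
    #include <stdio.h>

    /* T10 DIF tuple carried per block (byte order ignored here). */
    struct t10_dif {
        uint16_t guard;    /* CRC-16 of the block's data */
        uint16_t app_tag;
        uint32_t ref_tag;
    };

    /* CRC-16 T10-DIF: polynomial 0x8BB7, init 0, no reflection. */
    static uint16_t crc16_t10dif(uint16_t crc, const uint8_t *buf, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)buf[i] << 8;
            for (int b = 0; b < 8; b++)
                crc = (crc & 0x8000) ? (crc << 1) ^ 0x8BB7 : crc << 1;
        }
        return crc;
    }

    /* Check the guard of every 512-byte block; one tuple per block,
     * kept out-of-band in difs[]. Returns 0 when all guards match. */
    static int sw_dif_verify(const uint8_t *data, size_t len,
                             const struct t10_dif *difs)
    {
        for (size_t blk = 0; blk < len / 512; blk++)
            if (crc16_t10dif(0, data + blk * 512, 512) != difs[blk].guard)
                return -1;
        return 0;
    }

    int main(void)
    {
        uint8_t data[4096] = { 0 };
        struct t10_dif difs[8] = { { 0 } };

        for (size_t blk = 0; blk < 8; blk++)
            difs[blk].guard = crc16_t10dif(0, data + blk * 512, 512);
        printf("verify: %d\n", sw_dif_verify(data, sizeof(data), difs));
        return 0;
    }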
00:05:07.077 [2024-05-15 10:45:23.126254] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2682017 ] 00:05:07.077 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.077 [2024-05-15 10:45:23.200454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.335 [2024-05-15 10:45:23.324192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.335 
10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:07.335 10:45:23 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.710 10:45:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.710 
10:45:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.710 10:45:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.710 10:45:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.710 10:45:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.710 10:45:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.710 10:45:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.710 10:45:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.710 10:45:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.710 10:45:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.710 10:45:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.710 10:45:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.710 10:45:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.710 10:45:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.710 10:45:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.710 10:45:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.710 10:45:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.710 10:45:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.710 10:45:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.710 10:45:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.710 10:45:24 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:08.710 10:45:24 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:08.710 10:45:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:08.710 10:45:24 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:08.710 10:45:24 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:08.710 10:45:24 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:08.710 10:45:24 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:08.710 00:05:08.710 real 0m1.498s 00:05:08.710 user 0m1.349s 00:05:08.710 sys 0m0.153s 00:05:08.710 10:45:24 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:08.710 10:45:24 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:08.710 ************************************ 00:05:08.710 END TEST accel_dif_verify 00:05:08.710 ************************************ 00:05:08.710 10:45:24 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:08.710 10:45:24 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:08.710 10:45:24 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:08.710 10:45:24 accel -- common/autotest_common.sh@10 -- # set +x 00:05:08.710 ************************************ 00:05:08.710 START TEST accel_dif_generate 00:05:08.710 ************************************ 00:05:08.710 10:45:24 accel.accel_dif_generate -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate 00:05:08.710 10:45:24 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:08.710 10:45:24 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:08.710 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.710 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.710 
10:45:24 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:08.710 10:45:24 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:08.710 10:45:24 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:08.710 10:45:24 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:08.710 10:45:24 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:08.710 10:45:24 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:08.710 10:45:24 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:08.710 10:45:24 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:08.710 10:45:24 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:08.710 10:45:24 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:08.710 [2024-05-15 10:45:24.682056] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:05:08.710 [2024-05-15 10:45:24.682122] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2682283 ] 00:05:08.710 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.710 [2024-05-15 10:45:24.756570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.710 [2024-05-15 10:45:24.879480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.710 10:45:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.710 10:45:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.710 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.710 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.710 10:45:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.710 10:45:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.710 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.710 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.968 10:45:24 accel.accel_dif_generate -- 
accel/accel.sh@23 -- # accel_opc=dif_generate 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 
00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:08.968 10:45:24 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:10.341 10:45:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:10.341 10:45:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:10.341 10:45:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:10.341 10:45:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:10.341 10:45:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:10.341 10:45:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:10.341 10:45:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:10.341 10:45:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:10.341 10:45:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:10.341 10:45:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:10.341 10:45:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:10.341 10:45:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:10.341 10:45:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:10.341 10:45:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:10.341 10:45:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:10.341 10:45:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:10.341 10:45:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:10.341 10:45:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:10.341 10:45:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:10.341 10:45:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:10.341 10:45:26 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:10.341 10:45:26 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:10.341 10:45:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:10.341 10:45:26 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:10.341 10:45:26 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:10.341 10:45:26 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:05:10.342 10:45:26 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:10.342 00:05:10.342 real 0m1.500s 00:05:10.342 user 0m1.353s 00:05:10.342 sys 
0m0.151s 00:05:10.342 10:45:26 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:10.342 10:45:26 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:05:10.342 ************************************ 00:05:10.342 END TEST accel_dif_generate 00:05:10.342 ************************************ 00:05:10.342 10:45:26 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:10.342 10:45:26 accel -- common/autotest_common.sh@1097 -- # '[' 6 -le 1 ']' 00:05:10.342 10:45:26 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:10.342 10:45:26 accel -- common/autotest_common.sh@10 -- # set +x 00:05:10.342 ************************************ 00:05:10.342 START TEST accel_dif_generate_copy 00:05:10.342 ************************************ 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w dif_generate_copy 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:10.342 [2024-05-15 10:45:26.235623] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
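accel_dif_generate closed out at real 0m1.500s. Where dif_verify checked existing tuples, dif_generate computes them for a buffer, and the dif_generate_copy case starting here does that computation while also copying the data to a separate destination, yielding an extended-block layout of 512 data bytes followed by their 8-byte tuple. A sketch under the same assumptions as the dif_verify note above, reusing the same CRC; sw_dif_generate_copy is an invented name:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* CRC-16 T10-DIF, identical to the dif_verify sketch. */
    static uint16_t crc16_t10dif(uint16_t crc, const uint8_t *buf, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)buf[i] << 8;
            for (int b = 0; b < 8; b++)
                crc = (crc & 0x8000) ? (crc << 1) ^ 0x8BB7 : crc << 1;
        }
        return crc;
    }

    /* Copy src in 512-byte blocks, appending an 8-byte tuple (guard,
     * app tag, ref tag) after each copied block. dst must hold
     * len + (len / 512) * 8 bytes. */
    static void sw_dif_generate_copy(uint8_t *dst, const uint8_t *src,
                                     size_t len, uint32_t ref_tag)
    {
        for (size_t blk = 0; blk < len / 512; blk++) {
            uint16_t guard = crc16_t10dif(0, src + blk * 512, 512);
            uint32_t ref = ref_tag + (uint32_t)blk;
            uint8_t *out = dst + blk * (512 + 8);

            memcpy(out, src + blk * 512, 512);
            out += 512;
            out[0] = guard >> 8; out[1] = guard & 0xff;  /* guard   */
            out[2] = 0;          out[3] = 0;             /* app tag */
            out[4] = ref >> 24;  out[5] = ref >> 16;     /* ref tag */
            out[6] = ref >> 8;   out[7] = ref & 0xff;
        }
    }

    int main(void)
    {
        static uint8_t src[4096], dst[4096 + 8 * 8];

        memset(src, 0x42, sizeof(src));
        sw_dif_generate_copy(dst, src, sizeof(src), 0);
        printf("first guard: 0x%02x%02x\n", dst[512], dst[513]);
        return 0;
    }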
00:05:10.342 [2024-05-15 10:45:26.235692] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2682446 ] 00:05:10.342 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.342 [2024-05-15 10:45:26.310744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.342 [2024-05-15 10:45:26.428915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.342 10:45:26 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.342 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.343 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:10.343 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.343 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.343 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.343 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:10.343 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.343 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.343 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.343 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:10.343 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.343 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.343 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:10.343 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:10.343 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:10.343 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:10.343 10:45:26 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.715 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.715 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.715 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 
00:05:11.715 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.715 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.715 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.715 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.715 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.715 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.715 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.715 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.715 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.715 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.715 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.715 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.715 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.715 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.715 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.716 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.716 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.716 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:11.716 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:11.716 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:11.716 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:11.716 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:11.716 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:11.716 10:45:27 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:11.716 00:05:11.716 real 0m1.496s 00:05:11.716 user 0m1.344s 00:05:11.716 sys 0m0.155s 00:05:11.716 10:45:27 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:11.716 10:45:27 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:11.716 ************************************ 00:05:11.716 END TEST accel_dif_generate_copy 00:05:11.716 ************************************ 00:05:11.716 10:45:27 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:11.716 10:45:27 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:11.716 10:45:27 accel -- common/autotest_common.sh@1097 -- # '[' 8 -le 1 ']' 00:05:11.716 10:45:27 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:11.716 10:45:27 accel -- common/autotest_common.sh@10 -- # set +x 00:05:11.716 ************************************ 00:05:11.716 START TEST accel_comp 00:05:11.716 ************************************ 00:05:11.716 10:45:27 accel.accel_comp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:11.716 10:45:27 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:11.716 10:45:27 accel.accel_comp -- accel/accel.sh@17 -- # 
local accel_module 00:05:11.716 10:45:27 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.716 10:45:27 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:11.716 10:45:27 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.716 10:45:27 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:11.716 10:45:27 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:11.716 10:45:27 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:11.716 10:45:27 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:11.716 10:45:27 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:11.716 10:45:27 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:11.716 10:45:27 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:11.716 10:45:27 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:11.716 10:45:27 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:05:11.716 [2024-05-15 10:45:27.784750] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:05:11.716 [2024-05-15 10:45:27.784814] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2682609 ] 00:05:11.716 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.716 [2024-05-15 10:45:27.861174] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.973 [2024-05-15 10:45:27.984099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.973 
10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.973 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.974 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.974 10:45:28 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:11.974 10:45:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.974 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.974 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.974 10:45:28 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:11.974 10:45:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.974 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.974 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.974 10:45:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.974 10:45:28 accel.accel_comp -- 
accel/accel.sh@21 -- # case "$var" in 00:05:11.974 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.974 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:11.974 10:45:28 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:11.974 10:45:28 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:11.974 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:11.974 10:45:28 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:13.345 10:45:29 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:13.345 00:05:13.345 real 0m1.496s 00:05:13.345 user 0m1.344s 00:05:13.345 sys 0m0.155s 00:05:13.345 10:45:29 accel.accel_comp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:13.345 10:45:29 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:13.345 ************************************ 00:05:13.345 END TEST accel_comp 00:05:13.345 ************************************ 00:05:13.345 10:45:29 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:13.345 10:45:29 accel -- common/autotest_common.sh@1097 -- # '[' 9 -le 1 ']' 00:05:13.345 10:45:29 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:13.345 10:45:29 accel -- common/autotest_common.sh@10 -- # set +x 00:05:13.345 ************************************ 00:05:13.345 START TEST accel_decomp 00:05:13.345 ************************************ 00:05:13.345 10:45:29 
accel.accel_decomp -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:13.345 10:45:29 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:13.345 10:45:29 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:13.345 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:13.345 10:45:29 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:13.345 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:13.345 10:45:29 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:13.345 10:45:29 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:13.345 10:45:29 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:13.345 10:45:29 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:13.345 10:45:29 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.345 10:45:29 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:13.345 10:45:29 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:13.345 10:45:29 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:13.345 10:45:29 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:13.345 [2024-05-15 10:45:29.331672] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:05:13.345 [2024-05-15 10:45:29.331743] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2682877 ] 00:05:13.345 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.345 [2024-05-15 10:45:29.410136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.345 [2024-05-15 10:45:29.533113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:13.602 10:45:29 
accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:13.602 10:45:29 
accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:13.602 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:13.603 10:45:29 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:13.603 10:45:29 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:13.603 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:13.603 10:45:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:14.975 10:45:30 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:14.975 00:05:14.975 real 0m1.505s 00:05:14.975 user 0m1.348s 00:05:14.975 sys 0m0.159s 00:05:14.975 10:45:30 accel.accel_decomp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:14.975 10:45:30 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:14.975 ************************************ 00:05:14.975 END TEST accel_decomp 00:05:14.975 ************************************ 00:05:14.975 
10:45:30 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:14.975 10:45:30 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:14.975 10:45:30 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:14.975 10:45:30 accel -- common/autotest_common.sh@10 -- # set +x 00:05:14.975 ************************************ 00:05:14.975 START TEST accel_decmop_full 00:05:14.975 ************************************ 00:05:14.975 10:45:30 accel.accel_decmop_full -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:14.975 10:45:30 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:05:14.975 10:45:30 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:05:14.975 10:45:30 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.976 10:45:30 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:14.976 10:45:30 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.976 10:45:30 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:14.976 10:45:30 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:05:14.976 10:45:30 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:14.976 10:45:30 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:14.976 10:45:30 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:14.976 10:45:30 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:14.976 10:45:30 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:14.976 10:45:30 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:05:14.976 10:45:30 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:05:14.976 [2024-05-15 10:45:30.888875] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:05:14.976 [2024-05-15 10:45:30.888951] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2683040 ] 00:05:14.976 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.976 [2024-05-15 10:45:30.963029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.976 [2024-05-15 10:45:31.089683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 
00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=software 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=software 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:14.976 10:45:31 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@19 -- 
# read -r var val 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:16.354 10:45:32 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:16.354 00:05:16.354 real 0m1.511s 00:05:16.354 user 0m1.354s 00:05:16.354 sys 0m0.160s 00:05:16.354 10:45:32 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:16.354 10:45:32 accel.accel_decmop_full -- common/autotest_common.sh@10 -- # set +x 00:05:16.354 ************************************ 00:05:16.354 END TEST accel_decmop_full 00:05:16.354 ************************************ 00:05:16.354 10:45:32 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:16.354 10:45:32 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']' 00:05:16.354 10:45:32 accel -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:16.354 10:45:32 accel -- common/autotest_common.sh@10 -- # set +x 00:05:16.354 ************************************ 00:05:16.354 START TEST accel_decomp_mcore 00:05:16.354 ************************************ 00:05:16.354 10:45:32 accel.accel_decomp_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:16.354 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:16.354 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:16.354 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.354 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:16.354 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.354 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:16.354 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:16.354 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:16.354 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:16.354 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:16.354 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:16.354 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:16.354 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:16.354 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:16.354 [2024-05-15 10:45:32.457889] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:05:16.354 [2024-05-15 10:45:32.457965] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2683198 ] 00:05:16.354 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.354 [2024-05-15 10:45:32.534631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:16.613 [2024-05-15 10:45:32.658529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.613 [2024-05-15 10:45:32.658582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.613 [2024-05-15 10:45:32.658634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.613 [2024-05-15 10:45:32.658638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:16.613 10:45:32 
accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.613 10:45:32 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:16.613 10:45:32 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.034 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:18.034 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.034 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.034 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.034 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:18.034 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.034 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.034 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.034 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:18.034 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.034 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.034 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.034 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:18.034 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.034 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.034 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.034 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:18.034 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.034 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.034 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.034 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:18.034 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.034 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.034 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.034 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:18.035 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.035 10:45:33 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=:
00:05:18.035 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:18.035 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:18.035 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:18.035 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:18.035 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:18.035 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:18.035 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:18.035 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:18.035 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:18.035 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:18.035 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:18.035 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:18.035
00:05:18.035 real 0m1.516s
00:05:18.035 user 0m4.824s
00:05:18.035 sys 0m0.166s
00:05:18.035 10:45:33 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:18.035 10:45:33 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:05:18.035 ************************************
00:05:18.035 END TEST accel_decomp_mcore
00:05:18.035 ************************************
00:05:18.035 10:45:33 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:18.035 10:45:33 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:05:18.035 10:45:33 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:18.035 10:45:33 accel -- common/autotest_common.sh@10 -- # set +x
00:05:18.035 ************************************
00:05:18.035 START TEST accel_decomp_full_mcore
00:05:18.035 ************************************
00:05:18.035 10:45:34 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:18.035 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc
00:05:18.035 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module
00:05:18.035 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:18.035 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:18.035 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:18.035 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf
00:05:18.035 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config
00:05:18.035 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:18.035 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:18.035 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:18.035 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:18.035 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:18.035 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=,
00:05:18.035 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r .
00:05:18.035 [2024-05-15 10:45:34.025836] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization...
[2024-05-15 10:45:34.025899] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2683473 ]
00:05:18.035 EAL: No free 2048 kB hugepages reported on node 1
00:05:18.035 [2024-05-15 10:45:34.105414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:18.035 [2024-05-15 10:45:34.231353] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:18.035 [2024-05-15 10:45:34.231407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:05:18.035 [2024-05-15 10:45:34.231460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:05:18.035 [2024-05-15 10:45:34.231464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:18.293 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf
00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=
00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress
00:05:18.294 10:45:34
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case 
"$var" in 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:18.294 10:45:34 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 
00:05:19.668 10:45:35 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:18.035 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:18.035 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:18.035 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:18.035 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:18.035 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:18.035 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:18.035 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in
00:05:18.035 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=:
00:05:18.035 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val
00:05:18.035 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:18.035 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:18.035 10:45:33 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:18.035
00:05:18.035 real 0m1.512s
00:05:18.035 user 0m4.808s
00:05:18.035 sys 0m0.169s
00:05:19.669 10:45:35 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:19.669 10:45:35 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x
00:05:19.669 ************************************
00:05:19.669 END TEST accel_decomp_full_mcore
00:05:19.669 ************************************
00:05:19.669 10:45:35 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:19.669 10:45:35 accel -- common/autotest_common.sh@1097 -- # '[' 11 -le 1 ']'
00:05:19.669 10:45:35 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:19.669 10:45:35 accel -- common/autotest_common.sh@10 -- # set +x
00:05:19.669 ************************************
00:05:19.669 START TEST accel_decomp_mthread
00:05:19.669 ************************************
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=,
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r .
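
Every accel test in this section follows the invocation pattern just traced: the accel_test wrapper in accel.sh forwards its arguments to the accel_perf example binary and hands it the accel configuration JSON through an anonymous descriptor, which is presumably why the command line shows -c /dev/fd/62. A minimal bash sketch of that pattern, not the real accel.sh (build_json is a hypothetical stand-in for build_accel_config, which appears to have produced no extra config here, judging by the [[ 0 -gt 0 ]] no-op branches above):

    #!/usr/bin/env bash
    # Sketch of the accel_test -> accel_perf invocation traced above.
    accel_perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf

    build_json() {
        # Hypothetical stand-in for build_accel_config; the traced run added
        # no JSON entries, so an empty object is a fair approximation.
        echo '{}'
    }

    accel_test() {
        # bash process substitution surfaces as /dev/fd/NN, matching the
        # "-c /dev/fd/62" seen in the accel_perf command line above.
        "$accel_perf" -c <(build_json) "$@"
    }

    # The invocation being launched here: a 1-second verified decompress of
    # test/accel/bib with -T 2, which the mthread test name suggests means
    # two worker threads.
    accel_test -t 1 -w decompress -l test/accel/bib -y -T 2
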
00:05:19.669 [2024-05-15 10:45:35.591699] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization...
[2024-05-15 10:45:35.591764] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2683637 ]
00:05:19.669 EAL: No free 2048 kB hugepages reported on node 1
00:05:19.669 [2024-05-15 10:45:35.664211] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:19.669 [2024-05-15 10:45:35.785916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:19.669 10:45:35
accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:19.669 10:45:35 accel.accel_decomp_mthread -- 
accel/accel.sh@19 -- # read -r var val
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:21.040 10:45:37 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:21.040
00:05:21.040 real 0m1.486s
00:05:21.040 user 0m1.339s
00:05:21.040 sys 0m0.151s
00:05:21.041 10:45:37 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:21.041 10:45:37 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x
00:05:21.041 ************************************
00:05:21.041 END TEST accel_decomp_mthread
00:05:21.041 ************************************
00:05:21.041 10:45:37 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:05:21.041 10:45:37 accel -- common/autotest_common.sh@1097 -- # '[' 13 -le 1 ']'
00:05:21.041 10:45:37 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:21.041 10:45:37 accel -- common/autotest_common.sh@10 -- # set +x
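
The long runs of IFS=:, read -r var val and case "$var" records in these traces are a single parse loop in accel.sh: it splits accel_perf's output into colon-separated key and value fields, captures which opcode ran and which module executed it, and the accel.sh@27 records ([[ -n software ]], [[ -n decompress ]], [[ software == \s\o\f\t\w\a\r\e ]]) are the assertions on what was captured. A sketch of that loop; only the loop shape is taken from the trace, while the key names in the case arms and the perf_output.txt input are illustrative assumptions:

    # Sketch of the parse-and-assert loop reconstructed from the xtrace.
    accel_opc='' accel_module=''
    while IFS=: read -r var val; do
        case "$var" in
            *opcode*) accel_opc=$val ;;      # e.g. decompress
            *module*) accel_module=$val ;;   # e.g. software
        esac
    done < perf_output.txt   # hypothetical capture of accel_perf stdout

    # The accel.sh@27 checks seen above, in plain form:
    [[ -n $accel_module && -n $accel_opc ]] || exit 1
    [[ $accel_module == software ]] || exit 1
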
accel -- common/autotest_common.sh@10 -- # set +x
00:05:21.041 ************************************
00:05:21.041 START TEST accel_decomp_full_mthread
00:05:21.041 ************************************
00:05:21.041 10:45:37 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1121 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:05:21.041 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc
00:05:21.041 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module
00:05:21.041 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:21.041 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:05:21.041 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:21.041 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2
00:05:21.041 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config
00:05:21.041 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:21.041 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:21.041 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:21.041 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:21.041 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:21.041 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=,
00:05:21.041 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r .
00:05:21.041 [2024-05-15 10:45:37.128735] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization...
00:05:21.041 [2024-05-15 10:45:37.128798] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2683866 ]
00:05:21.041 EAL: No free 2048 kB hugepages reported on node 1
00:05:21.041 [2024-05-15 10:45:37.201944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:21.300 [2024-05-15 10:45:37.325876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes'
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19
-- # read -r var val 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # 
val=
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:21.300 10:45:37 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=:
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:22.674
00:05:22.674 real 0m1.537s
00:05:22.674 user 0m1.381s
00:05:22.674 sys 0m0.159s
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:22.674 10:45:38 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x
00:05:22.674 ************************************
00:05:22.674 END TEST accel_decomp_full_mthread
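
All four decompress variants end the same way: a real/user/sys triple followed by an END TEST banner. Both come from the run_test helper in autotest_common.sh, which brackets a test function in banners and times it; the per-test wall-clock numbers in this log (roughly 1.5 s per variant, consistent with the -t 1 runtime plus startup) are produced by that wrapper. A stripped-down sketch of the behavior, not the real helper:

    # Sketch of run_test's banner-and-time wrapping (simplified; the real
    # helper also validates arguments, which is what records like
    # '[' 13 -le 1 ']' are, and toggles xtrace around the body).
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # emits the real/user/sys lines seen after each test
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2
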
************************************
10:45:38 accel -- accel/accel.sh@124 -- # [[ n == y ]]
00:05:22.674 10:45:38 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:05:22.674 10:45:38 accel -- accel/accel.sh@137 -- # build_accel_config
00:05:22.674 10:45:38 accel -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']'
00:05:22.674 10:45:38 accel -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:22.674 10:45:38 accel -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:22.674 10:45:38 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:22.674 10:45:38 accel -- common/autotest_common.sh@10 -- # set +x
00:05:22.674 10:45:38 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:22.674 10:45:38 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:22.674 10:45:38 accel -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:22.674 10:45:38 accel -- accel/accel.sh@40 -- # local IFS=,
00:05:22.674 10:45:38 accel -- accel/accel.sh@41 -- # jq -r .
00:05:22.674 ************************************
00:05:22.674 START TEST accel_dif_functional_tests
00:05:22.674 ************************************
00:05:22.674 10:45:38 accel.accel_dif_functional_tests -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62
00:05:22.674 [2024-05-15 10:45:38.737598] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization...
[2024-05-15 10:45:38.737658] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2684070 ]
00:05:22.674 EAL: No free 2048 kB hugepages reported on node 1
00:05:22.674 [2024-05-15 10:45:38.809894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:22.932 [2024-05-15 10:45:38.933366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:22.932 [2024-05-15 10:45:38.933419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:05:22.932 [2024-05-15 10:45:38.933423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:22.932
00:05:22.932
00:05:22.932 CUnit - A unit testing framework for C - Version 2.1-3
00:05:22.932 http://cunit.sourceforge.net/
00:05:22.932
00:05:22.932
00:05:22.932 Suite: accel_dif
00:05:22.932 Test: verify: DIF generated, GUARD check ...passed
00:05:22.932 Test: verify: DIF generated, APPTAG check ...passed
00:05:22.932 Test: verify: DIF generated, REFTAG check ...passed
00:05:22.932 Test: verify: DIF not generated, GUARD check ...[2024-05-15 10:45:39.035881] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:05:22.932 [2024-05-15 10:45:39.035957] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867
00:05:22.932 passed
00:05:22.932 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 10:45:39.036002] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:05:22.932 [2024-05-15 10:45:39.036034] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a
00:05:22.932 passed
00:05:22.932 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 10:45:39.036069] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:05:22.932 [2024-05-15 10:45:39.036100] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a
00:05:22.932 passed
00:05:22.932 Test: verify: APPTAG correct, APPTAG check ...passed
00:05:22.932 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 10:45:39.036169] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14
00:05:22.932 passed
00:05:22.932 Test: verify: APPTAG incorrect, no APPTAG check ...passed
00:05:22.932 Test: verify: REFTAG incorrect, REFTAG ignore ...passed
00:05:22.932 Test: verify: REFTAG_INIT correct, REFTAG check ...passed
00:05:22.932 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 10:45:39.036345] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10
00:05:22.932 passed
00:05:22.932 Test: generate copy: DIF generated, GUARD check ...passed
00:05:22.932 Test: generate copy: DIF generated, APTTAG check ...passed
00:05:22.932 Test: generate copy: DIF generated, REFTAG check ...passed
00:05:22.932 Test: generate copy: DIF generated, no GUARD check flag set ...passed
00:05:22.932 Test: generate copy: DIF generated, no APPTAG check flag set ...passed
00:05:22.932 Test: generate copy: DIF generated, no REFTAG check flag set ...passed
00:05:22.932 Test: generate copy: iovecs-len validate ...[2024-05-15 10:45:39.036607] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
00:05:22.932 passed
00:05:22.932 Test: generate copy: buffer alignment validate ...passed
00:05:22.932
00:05:22.932 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:22.932 suites      1      1    n/a      0        0
00:05:22.932 tests      20     20     20      0        0
00:05:22.932 asserts     204    204    204      0      n/a
00:05:22.932
00:05:22.932 Elapsed time =    0.003 seconds
00:05:23.190
00:05:23.190 real 0m0.606s
00:05:23.190 user 0m0.915s
00:05:23.190 sys 0m0.191s
00:05:23.190 10:45:39 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:23.190 10:45:39 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x
00:05:23.190 ************************************
00:05:23.190 END TEST accel_dif_functional_tests
00:05:23.190 ************************************
00:05:23.190
00:05:23.190 real 0m33.968s
00:05:23.190 user 0m37.103s
00:05:23.190 sys 0m4.867s
00:05:23.190 10:45:39 accel -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:23.190 10:45:39 accel -- common/autotest_common.sh@10 -- # set +x
00:05:23.190 ************************************
00:05:23.190 END TEST accel
00:05:23.190 ************************************
00:05:23.190 10:45:39 -- spdk/autotest.sh@193 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:05:23.190 10:45:39 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:23.190 10:45:39 -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:23.190 10:45:39 -- common/autotest_common.sh@10 -- # set +x
00:05:23.190 ************************************
00:05:23.190 START TEST accel_rpc
00:05:23.190 ************************************
00:05:23.190 10:45:39 accel_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh
00:05:23.448 * Looking for test storage...
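
The accel_rpc suite starting here uses the standard SPDK RPC test setup, visible in the trace that follows: spdk_tgt is launched with --wait-for-rpc so it pauses before subsystem initialization, its pid is recorded (2684263 below), and waitforlisten polls the UNIX socket /var/tmp/spdk.sock until the target answers. A simplified sketch of that startup sequence; the polling loop is an approximation of the real waitforlisten helper, which also caps its retries:

    # Sketch of the spdk_tgt startup pattern traced below.
    ./build/bin/spdk_tgt --wait-for-rpc &
    spdk_tgt_pid=$!
    trap 'kill "$spdk_tgt_pid"; exit 1' ERR

    # Approximate waitforlisten: poll the RPC socket until it responds.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
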
00:05:23.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel
00:05:23.448 10:45:39 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:05:23.448 10:45:39 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2684263
00:05:23.448 10:45:39 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc
00:05:23.448 10:45:39 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 2684263
00:05:23.448 10:45:39 accel_rpc -- common/autotest_common.sh@827 -- # '[' -z 2684263 ']'
00:05:23.448 10:45:39 accel_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:23.448 10:45:39 accel_rpc -- common/autotest_common.sh@832 -- # local max_retries=100
00:05:23.448 10:45:39 accel_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:23.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:23.448 10:45:39 accel_rpc -- common/autotest_common.sh@836 -- # xtrace_disable
00:05:23.448 10:45:39 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:23.448 [2024-05-15 10:45:39.485097] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization...
[2024-05-15 10:45:39.485182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2684263 ]
00:05:23.448 EAL: No free 2048 kB hugepages reported on node 1
00:05:23.448 [2024-05-15 10:45:39.559378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:23.448 [2024-05-15 10:45:39.679602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:24.382 10:45:40 accel_rpc -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:05:24.382 10:45:40 accel_rpc -- common/autotest_common.sh@860 -- # return 0
00:05:24.382 10:45:40 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]]
00:05:24.382 10:45:40 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]]
00:05:24.382 10:45:40 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]]
00:05:24.382 10:45:40 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]]
00:05:24.382 10:45:40 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite
00:05:24.382 10:45:40 accel_rpc -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:05:24.382 10:45:40 accel_rpc -- common/autotest_common.sh@1103 -- # xtrace_disable
00:05:24.382 10:45:40 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:24.382 ************************************
00:05:24.382 START TEST accel_assign_opcode
00:05:24.382 ************************************
00:05:24.382 10:45:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1121 -- # accel_assign_opcode_test_suite
00:05:24.382 10:45:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect
00:05:24.382 10:45:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:24.382 10:45:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:05:24.382 [2024-05-15 10:45:40.482162] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect
00:05:24.382 10:45:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:24.383 10:45:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software
00:05:24.383 10:45:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:24.383 10:45:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:05:24.383 [2024-05-15 10:45:40.490161] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software
00:05:24.383 10:45:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:24.383 10:45:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init
00:05:24.383 10:45:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:24.383 10:45:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:05:24.643 10:45:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:24.643 10:45:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments
00:05:24.643 10:45:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable
00:05:24.643 10:45:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:05:24.643 10:45:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy
00:05:24.643 10:45:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software
00:05:24.643 10:45:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:05:24.643 software
00:05:24.643
00:05:24.643 real 0m0.305s
00:05:24.643 user 0m0.041s
00:05:24.643 sys 0m0.007s
00:05:24.643 10:45:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:24.643 10:45:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x
00:05:24.643 ************************************
00:05:24.643 END TEST accel_assign_opcode
00:05:24.643 ************************************
00:05:24.643 10:45:40 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 2684263
00:05:24.643 10:45:40 accel_rpc -- common/autotest_common.sh@946 -- # '[' -z 2684263 ']'
00:05:24.643 10:45:40 accel_rpc -- common/autotest_common.sh@950 -- # kill -0 2684263
00:05:24.643 10:45:40 accel_rpc -- common/autotest_common.sh@951 -- # uname
00:05:24.643 10:45:40 accel_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:05:24.643 10:45:40 accel_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2684263
00:05:24.643 10:45:40 accel_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:05:24.643 10:45:40 accel_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:05:24.643 10:45:40 accel_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2684263'
00:05:24.643 killing process with pid 2684263
00:05:24.643 10:45:40 accel_rpc -- common/autotest_common.sh@965 -- # kill 2684263
00:05:24.643 10:45:40 accel_rpc -- common/autotest_common.sh@970 -- # wait 2684263
00:05:25.210
00:05:25.210 real 0m1.929s
00:05:25.210 user 0m2.068s
00:05:25.210 sys 0m0.478s
00:05:25.210 10:45:41 accel_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable
00:05:25.210 10:45:41 accel_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:25.210 ************************************
00:05:25.210 END TEST accel_rpc
00:05:25.210 ************************************
00:05:25.210 10:45:41 -- spdk/autotest.sh@194 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
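
The accel_assign_opcode test that just passed is a three-step RPC exchange, all of it visible in the trace above: assign the copy opcode while the target is still paused in its pre-init state (first to a bogus module, then to software), complete initialization, then read the assignment table back. The same steps as direct rpc.py calls, on the assumption that rpc_cmd in the trace is a thin wrapper over them:

    # Assign 'copy' before subsystems initialize; this is permitted because
    # the target was started with --wait-for-rpc.
    ./scripts/rpc.py accel_assign_opc -o copy -m software
    # Complete startup; opcode assignments take effect from here.
    ./scripts/rpc.py framework_start_init
    # Verify the assignment, as the test does with jq -r .copy | grep software.
    ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy | grep software
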
run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:25.210 10:45:41 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:25.210 10:45:41 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:25.210 10:45:41 -- common/autotest_common.sh@10 -- # set +x 00:05:25.210 ************************************ 00:05:25.210 START TEST app_cmdline 00:05:25.210 ************************************ 00:05:25.210 10:45:41 app_cmdline -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:25.210 * Looking for test storage... 00:05:25.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:25.210 10:45:41 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:25.210 10:45:41 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2684482 00:05:25.210 10:45:41 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:25.210 10:45:41 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2684482 00:05:25.210 10:45:41 app_cmdline -- common/autotest_common.sh@827 -- # '[' -z 2684482 ']' 00:05:25.210 10:45:41 app_cmdline -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.210 10:45:41 app_cmdline -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:25.210 10:45:41 app_cmdline -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.210 10:45:41 app_cmdline -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:25.210 10:45:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:25.468 [2024-05-15 10:45:41.475367] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:05:25.468 [2024-05-15 10:45:41.475460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2684482 ] 00:05:25.468 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.468 [2024-05-15 10:45:41.554427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.468 [2024-05-15 10:45:41.674415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.726 10:45:41 app_cmdline -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:25.726 10:45:41 app_cmdline -- common/autotest_common.sh@860 -- # return 0 00:05:25.726 10:45:41 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:25.983 { 00:05:25.983 "version": "SPDK v24.05-pre git sha1 08ee631f2", 00:05:25.983 "fields": { 00:05:25.983 "major": 24, 00:05:25.983 "minor": 5, 00:05:25.983 "patch": 0, 00:05:25.983 "suffix": "-pre", 00:05:25.983 "commit": "08ee631f2" 00:05:25.983 } 00:05:25.983 } 00:05:25.983 10:45:42 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:25.983 10:45:42 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:25.983 10:45:42 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:25.983 10:45:42 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:25.983 10:45:42 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:25.983 10:45:42 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.241 10:45:42 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:26.241 10:45:42 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:26.241 10:45:42 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:26.241 10:45:42 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.241 10:45:42 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:26.241 10:45:42 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:26.241 10:45:42 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:26.241 10:45:42 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:26.241 10:45:42 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:26.241 10:45:42 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:26.241 10:45:42 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:26.241 10:45:42 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:26.241 10:45:42 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:26.241 10:45:42 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:26.241 10:45:42 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:26.241 10:45:42 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:26.241 10:45:42 
app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:26.241 10:45:42 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:26.499 request: 00:05:26.499 { 00:05:26.499 "method": "env_dpdk_get_mem_stats", 00:05:26.499 "req_id": 1 00:05:26.499 } 00:05:26.499 Got JSON-RPC error response 00:05:26.499 response: 00:05:26.499 { 00:05:26.499 "code": -32601, 00:05:26.499 "message": "Method not found" 00:05:26.499 } 00:05:26.499 10:45:42 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:26.499 10:45:42 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:26.499 10:45:42 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:26.499 10:45:42 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:26.499 10:45:42 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2684482 00:05:26.499 10:45:42 app_cmdline -- common/autotest_common.sh@946 -- # '[' -z 2684482 ']' 00:05:26.499 10:45:42 app_cmdline -- common/autotest_common.sh@950 -- # kill -0 2684482 00:05:26.499 10:45:42 app_cmdline -- common/autotest_common.sh@951 -- # uname 00:05:26.499 10:45:42 app_cmdline -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:26.499 10:45:42 app_cmdline -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2684482 00:05:26.499 10:45:42 app_cmdline -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:26.499 10:45:42 app_cmdline -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:26.499 10:45:42 app_cmdline -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2684482' 00:05:26.499 killing process with pid 2684482 00:05:26.499 10:45:42 app_cmdline -- common/autotest_common.sh@965 -- # kill 2684482 00:05:26.499 10:45:42 app_cmdline -- common/autotest_common.sh@970 -- # wait 2684482 00:05:27.068 00:05:27.068 real 0m1.637s 00:05:27.068 user 0m2.001s 00:05:27.068 sys 0m0.483s 00:05:27.068 10:45:43 app_cmdline -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.068 10:45:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:27.068 ************************************ 00:05:27.068 END TEST app_cmdline 00:05:27.068 ************************************ 00:05:27.068 10:45:43 -- spdk/autotest.sh@195 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:27.068 10:45:43 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:05:27.068 10:45:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.068 10:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:27.068 ************************************ 00:05:27.068 START TEST version 00:05:27.068 ************************************ 00:05:27.068 10:45:43 version -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:27.068 * Looking for test storage... 
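[editor's annotation] The app_cmdline run above starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, confirms those are the only two methods exposed, and then checks that anything else (env_dpdk_get_mem_stats) is rejected with JSON-RPC error -32601 "Method not found". A minimal sketch of the same check without scripts/rpc.py, assuming the default /var/tmp/spdk.sock socket and a target started with the same allow-list:

#!/usr/bin/env python3
# Sketch of the allow-list check in test/app/cmdline.sh, talking
# JSON-RPC 2.0 directly to spdk_tgt's default UNIX socket.
import json
import socket

SOCK_PATH = "/var/tmp/spdk.sock"  # default spdk_tgt RPC socket

def rpc_call(method, req_id=1):
    """Send one JSON-RPC 2.0 request and read back one JSON object."""
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(SOCK_PATH)
        sock.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                raise RuntimeError("socket closed before a full reply")
            buf += chunk
            try:
                return json.loads(buf)
            except json.JSONDecodeError:
                continue  # reply not complete yet, keep reading

if __name__ == "__main__":
    methods = sorted(rpc_call("rpc_get_methods")["result"])
    assert methods == ["rpc_get_methods", "spdk_get_version"], methods
    resp = rpc_call("env_dpdk_get_mem_stats", req_id=2)
    # Everything outside the allow-list must fail exactly like above.
    assert resp["error"]["code"] == -32601, resp
    print("allow-list behaves as expected:", methods)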
00:05:27.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:27.068 10:45:43 version -- app/version.sh@17 -- # get_header_version major 00:05:27.068 10:45:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:27.068 10:45:43 version -- app/version.sh@14 -- # cut -f2 00:05:27.068 10:45:43 version -- app/version.sh@14 -- # tr -d '"' 00:05:27.068 10:45:43 version -- app/version.sh@17 -- # major=24 00:05:27.068 10:45:43 version -- app/version.sh@18 -- # get_header_version minor 00:05:27.068 10:45:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:27.068 10:45:43 version -- app/version.sh@14 -- # cut -f2 00:05:27.068 10:45:43 version -- app/version.sh@14 -- # tr -d '"' 00:05:27.068 10:45:43 version -- app/version.sh@18 -- # minor=5 00:05:27.068 10:45:43 version -- app/version.sh@19 -- # get_header_version patch 00:05:27.068 10:45:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:27.068 10:45:43 version -- app/version.sh@14 -- # cut -f2 00:05:27.068 10:45:43 version -- app/version.sh@14 -- # tr -d '"' 00:05:27.068 10:45:43 version -- app/version.sh@19 -- # patch=0 00:05:27.068 10:45:43 version -- app/version.sh@20 -- # get_header_version suffix 00:05:27.068 10:45:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:27.068 10:45:43 version -- app/version.sh@14 -- # cut -f2 00:05:27.068 10:45:43 version -- app/version.sh@14 -- # tr -d '"' 00:05:27.068 10:45:43 version -- app/version.sh@20 -- # suffix=-pre 00:05:27.068 10:45:43 version -- app/version.sh@22 -- # version=24.5 00:05:27.068 10:45:43 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:27.068 10:45:43 version -- app/version.sh@28 -- # version=24.5rc0 00:05:27.068 10:45:43 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:27.068 10:45:43 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:27.068 10:45:43 version -- app/version.sh@30 -- # py_version=24.5rc0 00:05:27.068 10:45:43 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:05:27.068 00:05:27.068 real 0m0.110s 00:05:27.068 user 0m0.056s 00:05:27.068 sys 0m0.076s 00:05:27.068 10:45:43 version -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:27.068 10:45:43 version -- common/autotest_common.sh@10 -- # set +x 00:05:27.068 ************************************ 00:05:27.068 END TEST version 00:05:27.068 ************************************ 00:05:27.068 10:45:43 -- spdk/autotest.sh@197 -- # '[' 0 -eq 1 ']' 00:05:27.068 10:45:43 -- spdk/autotest.sh@207 -- # uname -s 00:05:27.068 10:45:43 -- spdk/autotest.sh@207 -- # [[ Linux == Linux ]] 00:05:27.068 10:45:43 -- spdk/autotest.sh@208 -- # [[ 0 -eq 1 ]] 00:05:27.068 10:45:43 -- spdk/autotest.sh@208 -- # [[ 0 -eq 1 ]] 00:05:27.068 10:45:43 -- spdk/autotest.sh@220 -- # '[' 0 -eq 1 ']' 
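[editor's annotation] The version test above pulls SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h with grep | cut -f2 | tr -d '"', assembles "24.5", maps the "-pre" suffix to "rc0", and compares the result against python's spdk.__version__. A sketch of the same extraction, with the header path as an assumption:

#!/usr/bin/env python3
# Sketch of the grep | cut | tr pipeline in app/version.sh: read the
# SPDK_VERSION_* defines and rebuild the package version string.
import re

HEADER = "spdk/include/spdk/version.h"  # adjust to your checkout

def get_header_version(field):
    with open(HEADER) as f:
        text = f.read()
    m = re.search(rf"^#define SPDK_VERSION_{field}\s+(.+)$", text, re.MULTILINE)
    return m.group(1).strip().strip('"')  # tr -d '"' equivalent

major, minor = get_header_version("MAJOR"), get_header_version("MINOR")
patch, suffix = get_header_version("PATCH"), get_header_version("SUFFIX")

version = f"{major}.{minor}"
if patch != "0":
    version += f".{patch}"
if suffix:            # version.sh turns the "-pre" suffix into rc0
    version += "rc0"
print(version)        # 24.5rc0 for the build under test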
00:05:27.068 10:45:43 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:05:27.068 10:45:43 -- spdk/autotest.sh@269 -- # timing_exit lib 00:05:27.068 10:45:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:27.068 10:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:27.068 10:45:43 -- spdk/autotest.sh@271 -- # '[' 0 -eq 1 ']' 00:05:27.068 10:45:43 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:05:27.068 10:45:43 -- spdk/autotest.sh@288 -- # '[' 1 -eq 1 ']' 00:05:27.068 10:45:43 -- spdk/autotest.sh@289 -- # export NET_TYPE 00:05:27.068 10:45:43 -- spdk/autotest.sh@292 -- # '[' tcp = rdma ']' 00:05:27.068 10:45:43 -- spdk/autotest.sh@295 -- # '[' tcp = tcp ']' 00:05:27.068 10:45:43 -- spdk/autotest.sh@296 -- # run_test_wrapper nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:27.068 10:45:43 -- spdk/autotest.sh@10 -- # local test_name=nvmf_tcp 00:05:27.068 10:45:43 -- spdk/autotest.sh@11 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:27.068 10:45:43 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:27.069 10:45:43 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.069 10:45:43 -- common/autotest_common.sh@10 -- # set +x 00:05:27.069 ************************************ 00:05:27.069 START TEST nvmf_tcp 00:05:27.069 ************************************ 00:05:27.069 10:45:43 nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:27.327 * Looking for test storage... 00:05:27.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:27.327 10:45:43 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:27.327 10:45:43 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:27.327 10:45:43 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:27.327 10:45:43 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.327 10:45:43 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.327 10:45:43 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.327 10:45:43 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:05:27.327 10:45:43 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:05:27.327 10:45:43 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:27.327 10:45:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:05:27.327 10:45:43 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:27.327 10:45:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:27.328 10:45:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:27.328 10:45:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:27.328 ************************************ 00:05:27.328 START TEST nvmf_example 00:05:27.328 ************************************ 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:05:27.328 * Looking for test storage... 
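[editor's annotation] nvmf/common.sh, sourced above and again for each target test below, derives the initiator identity from `nvme gen-hostnqn`: a UUID-based NQN in the 2014-08 nvmexpress namespace, with the same UUID reused as NVME_HOSTID. A sketch of the equivalent generation (uuid4 stands in for however nvme-cli sources the UUID on this host):

#!/usr/bin/env python3
# Sketch of what `nvme gen-hostnqn` hands back to nvmf/common.sh.
import uuid

host_uuid = uuid.uuid4()
hostnqn = f"nqn.2014-08.org.nvmexpress:uuid:{host_uuid}"
hostid = str(host_uuid)  # common.sh sets NVME_HOSTID to the bare UUID
print(hostnqn)           # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...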
00:05:27.328 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@720 -- # xtrace_disable 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:05:27.328 10:45:43 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:29.861 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:29.861 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:29.861 Found net devices under 
0000:0a:00.0: cvl_0_0 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:29.861 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:29.861 10:45:45 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:29.861 10:45:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:05:29.861 10:45:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:29.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:29.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:05:29.861 00:05:29.861 --- 10.0.0.2 ping statistics --- 00:05:29.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:29.861 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:05:29.861 10:45:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:29.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:29.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:05:29.861 00:05:29.861 --- 10.0.0.1 ping statistics --- 00:05:29.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:29.861 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:05:29.861 10:45:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:29.861 10:45:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:05:29.861 10:45:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:29.861 10:45:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:29.861 10:45:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:29.861 10:45:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:29.861 10:45:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:29.861 10:45:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:29.861 10:45:46 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:29.861 10:45:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:05:29.861 10:45:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:05:29.861 10:45:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@720 -- # xtrace_disable 00:05:29.861 10:45:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:29.861 10:45:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:05:29.861 10:45:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:05:29.861 10:45:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2686797 00:05:29.861 10:45:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:05:29.862 10:45:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:05:29.862 10:45:46 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2686797 00:05:29.862 10:45:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@827 -- # '[' -z 2686797 ']' 00:05:29.862 10:45:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.862 10:45:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:29.862 10:45:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
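[editor's annotation] The nvmf_tcp_init block a few lines up builds the two-port physical topology: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, its sibling (cvl_0_1) stays in the host namespace as the initiator side at 10.0.0.1, port 4420 is opened in iptables, and both directions are ping-verified. A sketch replaying the same `ip` commands, needing root; interface and namespace names are taken from this run:

#!/usr/bin/env python3
# Sketch of the namespace plumbing nvmf/common.sh performs above.
import subprocess

TARGET_IF, INITIATOR_IF, NS = "cvl_0_0", "cvl_0_1", "cvl_0_0_ns_spdk"

def sh(cmd):
    subprocess.run(cmd.split(), check=True)

sh(f"ip -4 addr flush {TARGET_IF}")
sh(f"ip -4 addr flush {INITIATOR_IF}")
sh(f"ip netns add {NS}")
sh(f"ip link set {TARGET_IF} netns {NS}")                      # target port into the netns
sh(f"ip addr add 10.0.0.1/24 dev {INITIATOR_IF}")              # initiator side, host ns
sh(f"ip netns exec {NS} ip addr add 10.0.0.2/24 dev {TARGET_IF}")
sh(f"ip link set {INITIATOR_IF} up")
sh(f"ip netns exec {NS} ip link set {TARGET_IF} up")
sh(f"ip netns exec {NS} ip link set lo up")
sh(f"iptables -I INPUT 1 -i {INITIATOR_IF} -p tcp --dport 4420 -j ACCEPT")
sh("ping -c 1 10.0.0.2")  # initiator -> target reachability check

Sending the target into its own namespace forces NVMe/TCP traffic over the physical wire between the two ports instead of the kernel loopback path.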
00:05:29.862 10:45:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:29.862 10:45:46 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:30.120 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@860 -- # return 0 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:05:31.057 10:45:47 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:05:31.057 EAL: No free 2048 kB hugepages reported on node 1 
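[editor's annotation] The rpc_cmd sequence just above assembles the target that the spdk_nvme_perf invocation (whose output follows) exercises: a TCP transport, one 64 MiB malloc ramdisk, a subsystem with that namespace, and a listener on 10.0.0.2:4420. A sketch replaying it through scripts/rpc.py, assuming the example target is already listening on the default RPC socket and an SPDK checkout at ./spdk; every flag is copied from the run:

#!/usr/bin/env python3
# Sketch of the target bring-up done via rpc_cmd in nvmf_example.sh.
import shlex
import subprocess

RPC = "./spdk/scripts/rpc.py"  # assumption: checkout location

def rpc(args):
    subprocess.run([RPC, *shlex.split(args)], check=True)

rpc("nvmf_create_transport -t tcp -o -u 8192")  # TCP transport, 8 KiB IO unit
rpc("bdev_malloc_create 64 512")                # 64 MiB / 512 B blocks, named Malloc0
rpc("nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001")
rpc("nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0")
rpc("nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420")

spdk_nvme_perf then connects from the host side with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'.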
00:05:41.069 Initializing NVMe Controllers 00:05:41.069 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:41.069 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:41.069 Initialization complete. Launching workers. 00:05:41.069 ======================================================== 00:05:41.069 Latency(us) 00:05:41.069 Device Information : IOPS MiB/s Average min max 00:05:41.069 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14486.89 56.59 4417.89 880.21 15466.71 00:05:41.069 ======================================================== 00:05:41.069 Total : 14486.89 56.59 4417.89 880.21 15466.71 00:05:41.069 00:05:41.328 10:45:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:05:41.328 10:45:57 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:05:41.328 10:45:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:41.328 10:45:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:05:41.328 10:45:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:05:41.328 10:45:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:05:41.328 10:45:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:41.328 10:45:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:05:41.328 rmmod nvme_tcp 00:05:41.328 rmmod nvme_fabrics 00:05:41.328 rmmod nvme_keyring 00:05:41.328 10:45:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:41.328 10:45:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:05:41.328 10:45:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:05:41.328 10:45:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2686797 ']' 00:05:41.328 10:45:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2686797 00:05:41.328 10:45:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@946 -- # '[' -z 2686797 ']' 00:05:41.328 10:45:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@950 -- # kill -0 2686797 00:05:41.328 10:45:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # uname 00:05:41.328 10:45:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:41.328 10:45:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2686797 00:05:41.328 10:45:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # process_name=nvmf 00:05:41.328 10:45:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@956 -- # '[' nvmf = sudo ']' 00:05:41.328 10:45:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2686797' 00:05:41.328 killing process with pid 2686797 00:05:41.328 10:45:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # kill 2686797 00:05:41.328 10:45:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@970 -- # wait 2686797 00:05:41.588 nvmf threads initialize successfully 00:05:41.588 bdev subsystem init successfully 00:05:41.588 created a nvmf target service 00:05:41.588 create targets's poll groups done 00:05:41.588 all subsystems of target started 00:05:41.588 nvmf target is running 00:05:41.588 all subsystems of target stopped 00:05:41.588 destroy targets's poll groups done 00:05:41.588 destroyed the nvmf target service 00:05:41.588 bdev subsystem finish successfully 00:05:41.588 nvmf threads destroy successfully 00:05:41.588 10:45:57 
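[editor's annotation] The perf summary above is internally consistent: 14486.89 IOPS of 4096-byte I/O is 56.59 MiB/s, and at queue depth 64 Little's law (concurrency = IOPS x mean latency) predicts 64 / 4417.89 us, roughly 14487 IOPS. A quick check:

# Consistency check of the perf summary line above.
iops, mib_s, lat_us, qd, io_size = 14486.89, 56.59, 4417.89, 64, 4096

assert abs(iops * io_size / 2**20 - mib_s) < 0.01  # IOPS -> MiB/s
assert abs(qd / (lat_us * 1e-6) - iops) < 1.0      # Little's law: qd = IOPS * latency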
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:41.588 10:45:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:05:41.588 10:45:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:05:41.588 10:45:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:41.588 10:45:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:41.588 10:45:57 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:41.588 10:45:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:41.588 10:45:57 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:43.488 10:45:59 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:05:43.488 10:45:59 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:05:43.488 10:45:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:43.488 10:45:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:43.488 00:05:43.488 real 0m16.356s 00:05:43.488 user 0m45.220s 00:05:43.488 sys 0m3.558s 00:05:43.488 10:45:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:43.488 10:45:59 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:05:43.488 ************************************ 00:05:43.488 END TEST nvmf_example 00:05:43.488 ************************************ 00:05:43.750 10:45:59 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:43.750 10:45:59 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:43.750 10:45:59 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:43.750 10:45:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.750 ************************************ 00:05:43.750 START TEST nvmf_filesystem 00:05:43.750 ************************************ 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:05:43.750 * Looking for test storage... 
00:05:43.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:43.750 10:45:59 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:05:43.750 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:05:43.751 10:45:59 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:43.751 
10:45:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:05:43.751 #define SPDK_CONFIG_H 00:05:43.751 #define SPDK_CONFIG_APPS 1 00:05:43.751 #define SPDK_CONFIG_ARCH native 00:05:43.751 #undef SPDK_CONFIG_ASAN 00:05:43.751 #undef SPDK_CONFIG_AVAHI 00:05:43.751 #undef SPDK_CONFIG_CET 00:05:43.751 #define SPDK_CONFIG_COVERAGE 1 00:05:43.751 #define SPDK_CONFIG_CROSS_PREFIX 00:05:43.751 #undef SPDK_CONFIG_CRYPTO 00:05:43.751 #undef SPDK_CONFIG_CRYPTO_MLX5 00:05:43.751 #undef SPDK_CONFIG_CUSTOMOCF 00:05:43.751 #undef SPDK_CONFIG_DAOS 00:05:43.751 #define SPDK_CONFIG_DAOS_DIR 00:05:43.751 #define SPDK_CONFIG_DEBUG 1 00:05:43.751 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:05:43.751 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:43.751 #define SPDK_CONFIG_DPDK_INC_DIR 00:05:43.751 #define SPDK_CONFIG_DPDK_LIB_DIR 00:05:43.751 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:05:43.751 #undef SPDK_CONFIG_DPDK_UADK 00:05:43.751 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:43.751 #define SPDK_CONFIG_EXAMPLES 1 00:05:43.751 #undef SPDK_CONFIG_FC 00:05:43.751 #define SPDK_CONFIG_FC_PATH 00:05:43.751 #define SPDK_CONFIG_FIO_PLUGIN 1 00:05:43.751 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:05:43.751 #undef SPDK_CONFIG_FUSE 00:05:43.751 #undef SPDK_CONFIG_FUZZER 00:05:43.751 #define SPDK_CONFIG_FUZZER_LIB 00:05:43.751 #undef SPDK_CONFIG_GOLANG 00:05:43.751 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:05:43.751 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:05:43.751 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:05:43.751 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:05:43.751 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:05:43.751 #undef SPDK_CONFIG_HAVE_LIBBSD 00:05:43.751 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:05:43.751 #define SPDK_CONFIG_IDXD 1 00:05:43.751 #undef SPDK_CONFIG_IDXD_KERNEL 00:05:43.751 #undef SPDK_CONFIG_IPSEC_MB 00:05:43.751 #define SPDK_CONFIG_IPSEC_MB_DIR 00:05:43.751 #define SPDK_CONFIG_ISAL 1 00:05:43.751 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:05:43.751 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:05:43.751 #define SPDK_CONFIG_LIBDIR 00:05:43.751 #undef SPDK_CONFIG_LTO 00:05:43.751 #define SPDK_CONFIG_MAX_LCORES 00:05:43.751 #define SPDK_CONFIG_NVME_CUSE 1 00:05:43.751 #undef SPDK_CONFIG_OCF 00:05:43.751 #define SPDK_CONFIG_OCF_PATH 00:05:43.751 #define SPDK_CONFIG_OPENSSL_PATH 00:05:43.751 #undef SPDK_CONFIG_PGO_CAPTURE 00:05:43.751 #define SPDK_CONFIG_PGO_DIR 00:05:43.751 #undef 
SPDK_CONFIG_PGO_USE 00:05:43.751 #define SPDK_CONFIG_PREFIX /usr/local 00:05:43.751 #undef SPDK_CONFIG_RAID5F 00:05:43.751 #undef SPDK_CONFIG_RBD 00:05:43.751 #define SPDK_CONFIG_RDMA 1 00:05:43.751 #define SPDK_CONFIG_RDMA_PROV verbs 00:05:43.751 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:05:43.751 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:05:43.751 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:05:43.751 #define SPDK_CONFIG_SHARED 1 00:05:43.751 #undef SPDK_CONFIG_SMA 00:05:43.751 #define SPDK_CONFIG_TESTS 1 00:05:43.751 #undef SPDK_CONFIG_TSAN 00:05:43.751 #define SPDK_CONFIG_UBLK 1 00:05:43.751 #define SPDK_CONFIG_UBSAN 1 00:05:43.751 #undef SPDK_CONFIG_UNIT_TESTS 00:05:43.751 #undef SPDK_CONFIG_URING 00:05:43.751 #define SPDK_CONFIG_URING_PATH 00:05:43.751 #undef SPDK_CONFIG_URING_ZNS 00:05:43.751 #undef SPDK_CONFIG_USDT 00:05:43.751 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:05:43.751 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:05:43.751 #define SPDK_CONFIG_VFIO_USER 1 00:05:43.751 #define SPDK_CONFIG_VFIO_USER_DIR 00:05:43.751 #define SPDK_CONFIG_VHOST 1 00:05:43.751 #define SPDK_CONFIG_VIRTIO 1 00:05:43.751 #undef SPDK_CONFIG_VTUNE 00:05:43.751 #define SPDK_CONFIG_VTUNE_DIR 00:05:43.751 #define SPDK_CONFIG_WERROR 1 00:05:43.751 #define SPDK_CONFIG_WPDK_DIR 00:05:43.751 #undef SPDK_CONFIG_XNVME 00:05:43.751 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.751 10:45:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@57 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@61 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # : 1 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # : 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # : 1 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # export 
SPDK_TEST_NVME_CLI 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # : 1 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # : 1 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # : tcp 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # : 1 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # : 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # : 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # : true 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # : 0 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:05:43.752 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # : 0 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # : 0 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # : 0 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # : 0 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # : e810 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # : 0 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # : 0 00:05:43.753 10:45:59 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # : 0 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # : 0 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # : 0 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # : 0 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # : 0 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # cat 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@235 -- # echo 
leak:libfuse3.so 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@237 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@239 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@241 -- # '[' -z /var/spdk/dependencies ']' 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEPENDENCY_DIR 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@248 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@252 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@255 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@258 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@261 -- # '[' 0 -eq 0 ']' 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # export valgrind= 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # valgrind= 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # uname -s 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@268 -- # '[' Linux = Linux ']' 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # HUGEMEM=4096 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # export CLEAR_HUGE=yes 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # CLEAR_HUGE=yes 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 
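
The long run of `: 0` / `export SPDK_TEST_*` pairs traced above is bash's colon builtin assigning per-flag defaults before export; judging from the trace, the underlying script most likely uses the following idiom (a sketch with two of the flags visible above, not the harness's full list):

    # Assign a default only when the variable is unset, then export it.
    # Under `set -x` the first line is traced as ": 0", which is exactly
    # what the log above shows for each test flag.
    : "${SPDK_RUN_VALGRIND=0}"
    export SPDK_RUN_VALGRIND
    : "${SPDK_TEST_NVMF_TRANSPORT=tcp}"
    export SPDK_TEST_NVMF_TRANSPORT

    # The LeakSanitizer setup traced just before this point boils down to
    # two lines; the suppressed fuse3 leak is taken verbatim from the trace:
    echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file
    export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
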
00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # [[ 0 -eq 1 ]] 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@278 -- # MAKE=make 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKEFLAGS=-j48 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # export HUGEMEM=4096 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@295 -- # HUGEMEM=4096 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@297 -- # NO_HUGE=() 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # TEST_MODE= 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # for i in "$@" 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # case "$i" in 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@305 -- # TEST_TRANSPORT=tcp 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # [[ -z 2688523 ]] 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@317 -- # kill -0 2688523 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # [[ -v testdir ]] 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@329 -- # local requested_size=2147483648 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local mount target_dir 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@332 -- # local -A mounts fss sizes avails uses 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local source fs size avail mount use 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@335 -- # local storage_fallback storage_candidates 00:05:43.753 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # mktemp -udt spdk.XXXXXX 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@337 -- # storage_fallback=/tmp/spdk.3DtziS 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@342 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@344 -- # [[ -n '' ]] 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@349 -- # [[ -n '' ]] 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@354 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.3DtziS/tests/target /tmp/spdk.3DtziS 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@357 -- # requested_size=2214592512 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # df -T 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@326 -- # grep -v Filesystem 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_devtmpfs 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=devtmpfs 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # 
avails["$mount"]=67108864 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=67108864 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=0 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=/dev/pmem0 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=ext2 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=973135872 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=5284429824 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4311293952 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=spdk_root 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=overlay 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=48456118272 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=61994729472 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=13538611200 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30941728768 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997364736 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=55635968 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=12389982208 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=12398948352 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=8966144 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=30995521536 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=30997364736 00:05:43.754 10:45:59 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=1843200 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # mounts["$mount"]=tmpfs 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # fss["$mount"]=tmpfs 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # avails["$mount"]=6199468032 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # sizes["$mount"]=6199472128 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # uses["$mount"]=4096 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@359 -- # read -r source fs size use avail _ mount 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@365 -- # printf '* Looking for test storage...\n' 00:05:43.754 * Looking for test storage... 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@367 -- # local target_space new_size 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # for target_dir in "${storage_candidates[@]}" 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # awk '$1 !~ /Filesystem/{print $6}' 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@371 -- # mount=/ 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@373 -- # target_space=48456118272 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # (( target_space == 0 || target_space < requested_size )) 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space >= requested_size )) 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == tmpfs ]] 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ overlay == ramfs ]] 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@379 -- # [[ / == / ]] 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # new_size=15753203712 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # (( new_size * 100 / sizes[/] > 95 )) 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@386 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:43.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # return 0 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:05:43.754 10:45:59 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.754 10:45:59 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.754 
10:45:59 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:43.755 10:45:59 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:05:43.755 10:45:59 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:46.284 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:46.284 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:46.284 10:46:02 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:46.284 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:46.284 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:46.284 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:46.284 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:05:46.284 00:05:46.284 --- 10.0.0.2 ping statistics --- 00:05:46.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:46.284 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:46.284 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:46.284 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.256 ms 00:05:46.284 00:05:46.284 --- 10.0.0.1 ping statistics --- 00:05:46.284 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:46.284 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:46.284 10:46:02 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:46.285 10:46:02 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:05:46.285 10:46:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:05:46.285 10:46:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:46.285 10:46:02 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:05:46.545 ************************************ 00:05:46.545 START TEST nvmf_filesystem_no_in_capsule 00:05:46.545 ************************************ 00:05:46.545 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 0 00:05:46.545 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:05:46.545 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:05:46.545 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:46.545 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@720 -- # 
xtrace_disable 00:05:46.545 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:46.545 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2690555 00:05:46.545 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:05:46.545 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2690555 00:05:46.545 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 2690555 ']' 00:05:46.545 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.545 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:05:46.545 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.545 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:05:46.545 10:46:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:46.545 [2024-05-15 10:46:02.584338] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:05:46.545 [2024-05-15 10:46:02.584421] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:46.545 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.545 [2024-05-15 10:46:02.667182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:46.803 [2024-05-15 10:46:02.791241] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:46.803 [2024-05-15 10:46:02.791303] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:46.803 [2024-05-15 10:46:02.791324] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:46.803 [2024-05-15 10:46:02.791338] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:46.803 [2024-05-15 10:46:02.791350] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
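[Annotation: NVMe/TCP test-bed layout] The entries above are nvmf/common.sh's nvmf_tcp_init at work: the two ports found under PCI 0000:0a:00.0 and 0000:0a:00.1 (cvl_0_0, cvl_0_1) are split across a network namespace so a single physical host can act as both target and initiator over real NICs. A condensed sketch of that setup, paraphrased from the xtrace above rather than quoted from the script (interface names and addresses are the ones logged):

    # Target-side port moves into its own namespace; the kernel
    # initiator keeps the second port in the default namespace.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    # Admit NVMe/TCP (port 4420) from the initiator-facing interface,
    # then verify reachability in both directions before starting.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

Because nvmf_tgt is then launched through NVMF_TARGET_NS_CMD (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), every listener the tests later add on 10.0.0.2:4420 is reached from the default namespace through the physical interfaces rather than loopback.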
00:05:46.803 [2024-05-15 10:46:02.791431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.803 [2024-05-15 10:46:02.791485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.803 [2024-05-15 10:46:02.791538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.803 [2024-05-15 10:46:02.791541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.367 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:05:47.367 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:05:47.367 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:47.367 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:47.367 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:47.367 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:47.367 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:05:47.367 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:05:47.367 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.367 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:47.367 [2024-05-15 10:46:03.593042] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:47.624 Malloc1 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:47.624 [2024-05-15 10:46:03.787338] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:05:47.624 [2024-05-15 10:46:03.787671] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:05:47.624 { 00:05:47.624 "name": "Malloc1", 00:05:47.624 "aliases": [ 00:05:47.624 "bfc363d3-db81-4dfd-a92b-4bd67dd3ce91" 00:05:47.624 ], 00:05:47.624 "product_name": "Malloc disk", 00:05:47.624 "block_size": 512, 00:05:47.624 "num_blocks": 1048576, 00:05:47.624 "uuid": "bfc363d3-db81-4dfd-a92b-4bd67dd3ce91", 00:05:47.624 "assigned_rate_limits": { 00:05:47.624 "rw_ios_per_sec": 0, 00:05:47.624 "rw_mbytes_per_sec": 0, 00:05:47.624 "r_mbytes_per_sec": 0, 00:05:47.624 "w_mbytes_per_sec": 0 00:05:47.624 }, 00:05:47.624 "claimed": true, 00:05:47.624 "claim_type": "exclusive_write", 00:05:47.624 "zoned": false, 00:05:47.624 "supported_io_types": { 00:05:47.624 "read": true, 00:05:47.624 "write": true, 00:05:47.624 "unmap": true, 00:05:47.624 "write_zeroes": true, 00:05:47.624 "flush": true, 00:05:47.624 "reset": true, 00:05:47.624 "compare": false, 00:05:47.624 "compare_and_write": false, 00:05:47.624 "abort": true, 00:05:47.624 "nvme_admin": false, 00:05:47.624 "nvme_io": false 00:05:47.624 }, 00:05:47.624 "memory_domains": [ 00:05:47.624 { 00:05:47.624 "dma_device_id": "system", 00:05:47.624 "dma_device_type": 1 
00:05:47.624 }, 00:05:47.624 { 00:05:47.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.624 "dma_device_type": 2 00:05:47.624 } 00:05:47.624 ], 00:05:47.624 "driver_specific": {} 00:05:47.624 } 00:05:47.624 ]' 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:05:47.624 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:05:47.882 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:05:47.882 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:05:47.882 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:05:47.882 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:05:47.882 10:46:03 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:05:48.467 10:46:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:05:48.467 10:46:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:05:48.467 10:46:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:05:48.467 10:46:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:05:48.467 10:46:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:05:50.367 10:46:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:05:50.367 10:46:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:05:50.367 10:46:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:05:50.367 10:46:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:05:50.367 10:46:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:05:50.367 10:46:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:05:50.367 10:46:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:05:50.367 10:46:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:05:50.367 10:46:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:05:50.367 10:46:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:05:50.367 10:46:06 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:50.367 10:46:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:50.367 10:46:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:05:50.367 10:46:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:05:50.367 10:46:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:05:50.367 10:46:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:05:50.367 10:46:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:05:50.625 10:46:06 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:05:51.556 10:46:07 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:05:52.488 10:46:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:05:52.488 10:46:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:05:52.488 10:46:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:52.488 10:46:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:52.488 10:46:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:52.488 ************************************ 00:05:52.488 START TEST filesystem_ext4 00:05:52.488 ************************************ 00:05:52.488 10:46:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1 00:05:52.488 10:46:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:05:52.488 10:46:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:52.488 10:46:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:05:52.488 10:46:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4 00:05:52.488 10:46:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:05:52.488 10:46:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local i=0 00:05:52.488 10:46:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local force 00:05:52.488 10:46:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']' 00:05:52.488 10:46:08 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # force=-F 00:05:52.488 10:46:08 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:05:52.488 mke2fs 1.46.5 (30-Dec-2021) 00:05:52.746 Discarding device blocks: 0/522240 done 00:05:52.746 Creating filesystem with 522240 1k blocks and 130560 inodes 00:05:52.746 Filesystem UUID: 215519ea-a739-4abb-bc72-95c726e67ef0 00:05:52.746 Superblock backups stored on blocks: 00:05:52.746 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:05:52.746 00:05:52.746 Allocating group tables: 0/64 done 00:05:52.746 Writing inode tables: 0/64 done 00:05:53.937 Creating journal (8192 blocks): done 00:05:53.937 Writing superblocks and filesystem accounting information: 0/64 done 00:05:53.937 00:05:53.937 10:46:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # return 0 00:05:53.937 10:46:09 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:54.883 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:54.883 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:05:54.883 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:54.883 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:05:54.883 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:05:54.884 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:54.884 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2690555 00:05:54.884 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:54.884 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:54.884 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:54.884 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:54.884 00:05:54.884 real 0m2.211s 00:05:54.884 user 0m0.014s 00:05:54.884 sys 0m0.041s 00:05:54.884 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:54.884 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:05:54.884 ************************************ 00:05:54.884 END TEST filesystem_ext4 00:05:54.884 ************************************ 00:05:54.884 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:05:54.884 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:54.884 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:54.884 10:46:10 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:54.884 ************************************ 00:05:54.884 START TEST filesystem_btrfs 00:05:54.884 ************************************ 00:05:54.884 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1 00:05:54.884 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:05:54.884 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:54.884 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:05:54.884 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs 00:05:54.884 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:05:54.884 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local i=0 00:05:54.884 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local force 00:05:54.884 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']' 00:05:54.884 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # force=-f 00:05:54.884 10:46:10 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:05:55.203 btrfs-progs v6.6.2 00:05:55.203 See https://btrfs.readthedocs.io for more information. 00:05:55.203 00:05:55.203 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:05:55.203 NOTE: several default settings have changed in version 5.15, please make sure 00:05:55.203 this does not affect your deployments: 00:05:55.203 - DUP for metadata (-m dup) 00:05:55.203 - enabled no-holes (-O no-holes) 00:05:55.203 - enabled free-space-tree (-R free-space-tree) 00:05:55.203 00:05:55.203 Label: (null) 00:05:55.203 UUID: b9007870-6b34-4c94-a534-762acff39e24 00:05:55.203 Node size: 16384 00:05:55.203 Sector size: 4096 00:05:55.203 Filesystem size: 510.00MiB 00:05:55.203 Block group profiles: 00:05:55.203 Data: single 8.00MiB 00:05:55.203 Metadata: DUP 32.00MiB 00:05:55.203 System: DUP 8.00MiB 00:05:55.203 SSD detected: yes 00:05:55.203 Zoned device: no 00:05:55.203 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:05:55.203 Runtime features: free-space-tree 00:05:55.203 Checksum: crc32c 00:05:55.203 Number of devices: 1 00:05:55.203 Devices: 00:05:55.203 ID SIZE PATH 00:05:55.203 1 510.00MiB /dev/nvme0n1p1 00:05:55.203 00:05:55.203 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # return 0 00:05:55.203 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2690555 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:55.462 00:05:55.462 real 0m0.682s 00:05:55.462 user 0m0.015s 00:05:55.462 sys 0m0.041s 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:05:55.462 ************************************ 00:05:55.462 END TEST filesystem_btrfs 00:05:55.462 ************************************ 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:05:55.462 10:46:11 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:55.462 ************************************ 00:05:55.462 START TEST filesystem_xfs 00:05:55.462 ************************************ 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local i=0 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local force 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']' 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # force=-f 00:05:55.462 10:46:11 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1 00:05:55.720 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:05:55.720 = sectsz=512 attr=2, projid32bit=1 00:05:55.720 = crc=1 finobt=1, sparse=1, rmapbt=0 00:05:55.720 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:05:55.720 data = bsize=4096 blocks=130560, imaxpct=25 00:05:55.720 = sunit=0 swidth=0 blks 00:05:55.720 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:05:55.720 log =internal log bsize=4096 blocks=16384, version=2 00:05:55.720 = sectsz=512 sunit=0 blks, lazy-count=1 00:05:55.720 realtime =none extsz=4096 blocks=0, rtextents=0 00:05:56.653 Discarding blocks...Done. 
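[Annotation: per-filesystem test body] The filesystem_ext4 and filesystem_btrfs subtests above, and the filesystem_xfs run in progress here, all execute the same nvmf_filesystem_create routine against the GPT partition carved from the exported 512 MiB Malloc1 bdev. A minimal sketch of that body, condensed from the make_filesystem helper and target/filesystem.sh (the helper's mkfs retry counter, the `local i=0` seen above, is omitted):

    nvmf_filesystem_create() {        # condensed paraphrase, not verbatim
        local fstype=$1 dev=/dev/nvme0n1p1 force
        # ext4 forces with -F; btrfs and xfs force with -f
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        mkfs."$fstype" "$force" "$dev"
        mount "$dev" /mnt/device
        touch /mnt/device/aaa         # small write through the page cache
        sync
        rm /mnt/device/aaa
        sync
        umount /mnt/device
        kill -0 "$nvmfpid"            # nvmf_tgt (pid 2690555) must survive
    }

The lsblk greps for nvme0n1 and nvme0n1p1 that follow confirm the NVMe/TCP namespace and its partition are still visible after the unmount, and the real/user/sys triples are the shell timing that run_test records for each subtest.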
00:05:56.653 10:46:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # return 0 00:05:56.653 10:46:12 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:05:59.182 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:05:59.182 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:05:59.182 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:05:59.182 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:05:59.182 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:05:59.182 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:05:59.182 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2690555 00:05:59.182 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:05:59.182 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:05:59.182 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:05:59.182 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:05:59.182 00:05:59.182 real 0m3.729s 00:05:59.182 user 0m0.018s 00:05:59.182 sys 0m0.040s 00:05:59.182 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable 00:05:59.182 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:05:59.182 ************************************ 00:05:59.182 END TEST filesystem_xfs 00:05:59.182 ************************************ 00:05:59.182 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:05:59.458 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:05:59.458 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:05:59.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:05:59.458 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:05:59.458 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1215 -- # local i=0 00:05:59.458 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:05:59.458 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:05:59.458 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:05:59.458 
10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:05:59.458 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # return 0 00:05:59.458 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:59.458 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.458 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:05:59.458 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.458 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:05:59.458 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2690555 00:05:59.458 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 2690555 ']' 00:05:59.458 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # kill -0 2690555 00:05:59.458 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # uname 00:05:59.458 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:05:59.458 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2690555 00:05:59.459 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:05:59.459 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:05:59.459 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2690555' 00:05:59.459 killing process with pid 2690555 00:05:59.459 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # kill 2690555 00:05:59.459 [2024-05-15 10:46:15.617406] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:05:59.459 10:46:15 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # wait 2690555 00:06:00.027 10:46:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:06:00.027 00:06:00.027 real 0m13.558s 00:06:00.027 user 0m52.123s 00:06:00.027 sys 0m1.826s 00:06:00.027 10:46:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:00.027 10:46:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:00.027 ************************************ 00:06:00.027 END TEST nvmf_filesystem_no_in_capsule 00:06:00.027 ************************************ 00:06:00.027 10:46:16 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:00.027 10:46:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1097 -- # 
'[' 3 -le 1 ']' 00:06:00.027 10:46:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:00.027 10:46:16 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:06:00.027 ************************************ 00:06:00.027 START TEST nvmf_filesystem_in_capsule 00:06:00.027 ************************************ 00:06:00.027 10:46:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1121 -- # nvmf_filesystem_part 4096 00:06:00.027 10:46:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:00.027 10:46:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:00.027 10:46:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:00.027 10:46:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:00.027 10:46:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:00.027 10:46:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2692378 00:06:00.027 10:46:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:00.027 10:46:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2692378 00:06:00.027 10:46:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@827 -- # '[' -z 2692378 ']' 00:06:00.027 10:46:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.027 10:46:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:00.027 10:46:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.027 10:46:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:00.027 10:46:16 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:00.027 [2024-05-15 10:46:16.204511] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:06:00.027 [2024-05-15 10:46:16.204607] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:00.027 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.285 [2024-05-15 10:46:16.286452] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:00.285 [2024-05-15 10:46:16.405380] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:00.285 [2024-05-15 10:46:16.405448] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
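[Annotation: in-capsule pass] From this point the whole suite repeats with in_capsule=4096: a fresh nvmf_tgt (nvmfpid=2692378) is started in the same namespace and the TCP transport is created with -c 4096, so write commands of up to 4096 bytes carry their payload inside the command capsule instead of waiting for an R2T-solicited data transfer. That -c value is the only RPC difference between the two passes; a minimal sketch of the two variants, assuming SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock:

    # first pass: no in-capsule data
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    # this pass: 4 KiB of in-capsule data
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096

The Malloc1 bdev, the nqn.2016-06.io.spdk:cnode1 subsystem, the 10.0.0.2:4420 listener, the nvme connect, and the ext4/btrfs/xfs loops below otherwise mirror the no-in-capsule pass.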
00:06:00.285 [2024-05-15 10:46:16.405464] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:00.285 [2024-05-15 10:46:16.405477] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:00.285 [2024-05-15 10:46:16.405488] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:00.285 [2024-05-15 10:46:16.405587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.285 [2024-05-15 10:46:16.405665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.285 [2024-05-15 10:46:16.405754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.285 [2024-05-15 10:46:16.405757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.219 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:01.219 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # return 0 00:06:01.219 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:01.219 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:01.219 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:01.219 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:01.219 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:01.219 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:01.219 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.219 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:01.219 [2024-05-15 10:46:17.209956] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:01.219 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.219 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:01.219 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.219 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:01.219 Malloc1 00:06:01.219 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.219 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:01.219 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.219 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:01.219 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.219 10:46:17 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:01.220 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.220 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:01.220 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.220 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:01.220 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.220 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:01.220 [2024-05-15 10:46:17.382247] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:01.220 [2024-05-15 10:46:17.382557] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:01.220 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.220 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:01.220 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1374 -- # local bdev_name=Malloc1 00:06:01.220 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_info 00:06:01.220 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bs 00:06:01.220 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local nb 00:06:01.220 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:01.220 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.220 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:06:01.220 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.220 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # bdev_info='[ 00:06:01.220 { 00:06:01.220 "name": "Malloc1", 00:06:01.220 "aliases": [ 00:06:01.220 "bd5d12ae-99ab-40e6-9bae-28dc229b4087" 00:06:01.220 ], 00:06:01.220 "product_name": "Malloc disk", 00:06:01.220 "block_size": 512, 00:06:01.220 "num_blocks": 1048576, 00:06:01.220 "uuid": "bd5d12ae-99ab-40e6-9bae-28dc229b4087", 00:06:01.220 "assigned_rate_limits": { 00:06:01.220 "rw_ios_per_sec": 0, 00:06:01.220 "rw_mbytes_per_sec": 0, 00:06:01.220 "r_mbytes_per_sec": 0, 00:06:01.220 "w_mbytes_per_sec": 0 00:06:01.220 }, 00:06:01.220 "claimed": true, 00:06:01.220 "claim_type": "exclusive_write", 00:06:01.220 "zoned": false, 00:06:01.220 "supported_io_types": { 00:06:01.220 "read": true, 00:06:01.220 "write": true, 00:06:01.220 "unmap": true, 00:06:01.220 "write_zeroes": true, 00:06:01.220 "flush": true, 00:06:01.220 "reset": true, 
00:06:01.220 "compare": false, 00:06:01.220 "compare_and_write": false, 00:06:01.220 "abort": true, 00:06:01.220 "nvme_admin": false, 00:06:01.220 "nvme_io": false 00:06:01.220 }, 00:06:01.220 "memory_domains": [ 00:06:01.220 { 00:06:01.220 "dma_device_id": "system", 00:06:01.220 "dma_device_type": 1 00:06:01.220 }, 00:06:01.220 { 00:06:01.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.220 "dma_device_type": 2 00:06:01.220 } 00:06:01.220 ], 00:06:01.220 "driver_specific": {} 00:06:01.220 } 00:06:01.220 ]' 00:06:01.220 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # jq '.[] .block_size' 00:06:01.220 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bs=512 00:06:01.220 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .num_blocks' 00:06:01.478 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # nb=1048576 00:06:01.478 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bdev_size=512 00:06:01.478 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # echo 512 00:06:01.478 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:01.478 10:46:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:02.044 10:46:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:02.044 10:46:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1194 -- # local i=0 00:06:02.044 10:46:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:06:02.044 10:46:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:06:02.044 10:46:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # sleep 2 00:06:03.944 10:46:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:06:03.944 10:46:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:06:03.944 10:46:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:06:03.944 10:46:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:06:03.944 10:46:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:06:03.944 10:46:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # return 0 00:06:03.944 10:46:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:03.944 10:46:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:03.944 10:46:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1
00:06:03.944 10:46:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1
00:06:03.944 10:46:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1
00:06:03.944 10:46:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:06:03.944 10:46:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912
00:06:03.944 10:46:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912
00:06:03.944 10:46:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device
00:06:03.944 10:46:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size ))
00:06:03.944 10:46:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
00:06:04.201 10:46:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe
00:06:04.766 10:46:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1
00:06:06.139 10:46:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']'
00:06:06.139 10:46:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1
00:06:06.139 10:46:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']'
00:06:06.139 10:46:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:06.139 10:46:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:06.139 ************************************
00:06:06.139 START TEST filesystem_in_capsule_ext4
00:06:06.139 ************************************
00:06:06.139 10:46:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create ext4 nvme0n1
00:06:06.139 10:46:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4
00:06:06.139 10:46:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:06:06.139 10:46:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1
00:06:06.139 10:46:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@922 -- # local fstype=ext4
00:06:06.139 10:46:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1
00:06:06.140 10:46:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local i=0
00:06:06.140 10:46:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local force
00:06:06.140 10:46:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # '[' ext4 = ext4 ']'
00:06:06.140 10:46:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # force=-F
00:06:06.140 10:46:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # mkfs.ext4 -F /dev/nvme0n1p1
00:06:06.140 mke2fs 1.46.5 (30-Dec-2021)
00:06:06.140 Discarding device blocks: 0/522240 done
00:06:06.140 Creating filesystem with 522240 1k blocks and 130560 inodes
00:06:06.140 Filesystem UUID: 0799a874-774c-429f-87aa-fe531a9af2d0
00:06:06.140 Superblock backups stored on blocks:
00:06:06.140 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409
00:06:06.140 
00:06:06.140 Allocating group tables: 0/64 done
00:06:06.140 Writing inode tables: 0/64 done
00:06:06.704 Creating journal (8192 blocks): done
00:06:07.526 Writing superblocks and filesystem accounting information: 0/64 6/64 done
00:06:07.526 
00:06:07.526 10:46:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # return 0
00:06:07.526 10:46:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2692378
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:06:08.461 
00:06:08.461 real 0m2.467s
00:06:08.461 user 0m0.016s
00:06:08.461 sys 0m0.030s
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x
00:06:08.461 ************************************
00:06:08.461 END TEST filesystem_in_capsule_ext4
00:06:08.461 ************************************
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']'
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:08.461 ************************************
00:06:08.461 START TEST filesystem_in_capsule_btrfs
00:06:08.461 ************************************
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create btrfs nvme0n1
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@922 -- # local fstype=btrfs
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local i=0
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local force
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # '[' btrfs = ext4 ']'
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # force=-f
00:06:08.461 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:06:08.719 btrfs-progs v6.6.2
00:06:08.719 See https://btrfs.readthedocs.io for more information.
00:06:08.719 
00:06:08.719 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:06:08.719 NOTE: several default settings have changed in version 5.15, please make sure
00:06:08.719 this does not affect your deployments:
00:06:08.719 - DUP for metadata (-m dup)
00:06:08.719 - enabled no-holes (-O no-holes)
00:06:08.719 - enabled free-space-tree (-R free-space-tree)
00:06:08.719 
00:06:08.719 Label: (null)
00:06:08.719 UUID: 6fd1785c-741d-4177-b908-cbb729ba23ec
00:06:08.719 Node size: 16384
00:06:08.719 Sector size: 4096
00:06:08.719 Filesystem size: 510.00MiB
00:06:08.719 Block group profiles:
00:06:08.719 Data: single 8.00MiB
00:06:08.719 Metadata: DUP 32.00MiB
00:06:08.719 System: DUP 8.00MiB
00:06:08.719 SSD detected: yes
00:06:08.719 Zoned device: no
00:06:08.719 Incompat features: extref, skinny-metadata, no-holes, free-space-tree
00:06:08.719 Runtime features: free-space-tree
00:06:08.719 Checksum: crc32c
00:06:08.719 Number of devices: 1
00:06:08.719 Devices:
00:06:08.719 ID SIZE PATH
00:06:08.719 1 510.00MiB /dev/nvme0n1p1
00:06:08.719 
00:06:08.719 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@941 -- # return 0
00:06:08.719 10:46:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:06:09.652 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:06:09.652 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:06:09.652 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:06:09.652 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync
00:06:09.652 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0
00:06:09.652 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:06:09.652 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2692378
00:06:09.652 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:06:09.652 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:06:09.652 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:06:09.652 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:06:09.652 
00:06:09.652 real 0m1.346s
00:06:09.652 user 0m0.008s
00:06:09.652 sys 0m0.051s
00:06:09.652 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:09.652 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x
00:06:09.652 ************************************
00:06:09.652 END TEST filesystem_in_capsule_btrfs
00:06:09.652 ************************************
00:06:09.652 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1
00:06:09.652 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']'
00:06:09.652 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:09.652 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:09.911 ************************************
00:06:09.911 START TEST filesystem_in_capsule_xfs
00:06:09.911 ************************************
00:06:09.911 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1121 -- # nvmf_filesystem_create xfs nvme0n1
00:06:09.911 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs
00:06:09.911 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1
00:06:09.911 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1
00:06:09.911 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@922 -- # local fstype=xfs
00:06:09.911 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local dev_name=/dev/nvme0n1p1
00:06:09.911 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local i=0
00:06:09.911 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local force
00:06:09.911 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # '[' xfs = ext4 ']'
00:06:09.911 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # force=-f
00:06:09.911 10:46:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # mkfs.xfs -f /dev/nvme0n1p1
00:06:09.911 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:06:09.911 = sectsz=512 attr=2, projid32bit=1
00:06:09.911 = crc=1 finobt=1, sparse=1, rmapbt=0
00:06:09.911 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:06:09.911 data = bsize=4096 blocks=130560, imaxpct=25
00:06:09.911 = sunit=0 swidth=0 blks
00:06:09.911 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:06:09.911 log =internal log bsize=4096 blocks=16384, version=2
00:06:09.911 = sectsz=512 sunit=0 blks, lazy-count=1
00:06:09.912 realtime =none extsz=4096 blocks=0, rtextents=0
00:06:10.879 Discarding blocks...Done.
00:06:10.879 10:46:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # return 0
00:06:10.879 10:46:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2692378
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:06:13.407 
00:06:13.407 real 0m3.331s
00:06:13.407 user 0m0.020s
00:06:13.407 sys 0m0.038s
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x
00:06:13.407 ************************************
00:06:13.407 END TEST filesystem_in_capsule_xfs
00:06:13.407 ************************************
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:06:13.407 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1215 -- # local i=0
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # return 0
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2692378
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@946 -- # '[' -z 2692378 ']'
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # kill -0 2692378
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # uname
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2692378
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2692378'
00:06:13.407 killing process with pid 2692378
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # kill 2692378
00:06:13.407 [2024-05-15 10:46:29.591442] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:06:13.407 10:46:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # wait 2692378
00:06:13.975 10:46:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid=
00:06:13.975 
00:06:13.975 real 0m13.941s
00:06:13.975 user 0m53.594s
00:06:13.975 sys 0m1.834s
00:06:13.975 10:46:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:13.975 10:46:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:06:13.975 ************************************
00:06:13.975 END TEST nvmf_filesystem_in_capsule
00:06:13.975 ************************************
00:06:13.975 10:46:30 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini
00:06:13.975 10:46:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:06:13.975 10:46:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync
00:06:13.975 10:46:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:06:13.975 10:46:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e
00:06:13.975 10:46:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:06:13.975 10:46:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:06:13.975 rmmod nvme_tcp
00:06:13.975 rmmod nvme_fabrics
00:06:13.975 rmmod nvme_keyring
00:06:13.976 10:46:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:06:13.976 10:46:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e
00:06:13.976 10:46:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0
00:06:13.976 10:46:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:06:13.976 10:46:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:06:13.976 10:46:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:06:13.976 10:46:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:06:13.976 10:46:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:06:13.976 10:46:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns
00:06:13.976 10:46:30 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:13.976 10:46:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:06:13.976 10:46:30 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:16.513 10:46:32 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:06:16.513 
00:06:16.513 real 0m32.461s
00:06:16.513 user 1m46.745s
00:06:16.513 sys 0m5.602s
00:06:16.513 10:46:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # xtrace_disable
00:06:16.513 10:46:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x
00:06:16.513 ************************************
00:06:16.513 END TEST nvmf_filesystem
00:06:16.513 ************************************
00:06:16.513 10:46:32 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:06:16.513 10:46:32 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:06:16.513 10:46:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:06:16.513 10:46:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:06:16.513 ************************************
00:06:16.513 START TEST nvmf_target_discovery
00:06:16.513 ************************************
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp
00:06:16.513 * Looking for test storage...
00:06:16.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable
00:06:16.513 10:46:32 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:19.048 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:06:19.048 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=()
00:06:19.048 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs
00:06:19.048 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=()
00:06:19.048 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:06:19.048 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=()
00:06:19.048 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers
00:06:19.048 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=()
00:06:19.048 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs
00:06:19.048 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=()
00:06:19.048 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810
00:06:19.048 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=()
00:06:19.048 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722
00:06:19.048 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=()
00:06:19.048 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx
00:06:19.048 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:06:19.048 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:06:19.048 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:06:19.048 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:06:19.048 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:06:19.049 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:06:19.049 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]]
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:06:19.049 Found net devices under 0000:0a:00.0: cvl_0_0
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]]
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:06:19.049 Found net devices under 0000:0a:00.1: cvl_0_1
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:06:19.049 10:46:34 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:06:19.049 10:46:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:06:19.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:06:19.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms
00:06:19.049 
00:06:19.049 --- 10.0.0.2 ping statistics ---
00:06:19.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:19.049 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms
00:06:19.049 10:46:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:19.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:19.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms
00:06:19.049 
00:06:19.049 --- 10.0.0.1 ping statistics ---
00:06:19.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:19.049 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms
00:06:19.049 10:46:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:19.049 10:46:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0
00:06:19.049 10:46:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:06:19.049 10:46:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:19.049 10:46:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:06:19.049 10:46:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:06:19.049 10:46:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:19.049 10:46:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:06:19.049 10:46:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:06:19.049 10:46:35 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:06:19.049 10:46:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:06:19.049 10:46:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@720 -- # xtrace_disable
00:06:19.049 10:46:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:19.049 10:46:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2696426
00:06:19.049 10:46:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:06:19.049 10:46:35 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2696426
00:06:19.049 10:46:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@827 -- # '[' -z 2696426 ']'
00:06:19.049 10:46:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:19.049 10:46:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local max_retries=100
00:06:19.049 10:46:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:19.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:19.049 10:46:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # xtrace_disable
00:06:19.049 10:46:35 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:19.049 [2024-05-15 10:46:35.091605] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization...
00:06:19.049 [2024-05-15 10:46:35.091694] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:19.049 EAL: No free 2048 kB hugepages reported on node 1
00:06:19.049 [2024-05-15 10:46:35.174437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:19.308 [2024-05-15 10:46:35.298569] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:06:19.308 [2024-05-15 10:46:35.298625] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:06:19.308 [2024-05-15 10:46:35.298641] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:19.308 [2024-05-15 10:46:35.298655] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:19.308 [2024-05-15 10:46:35.298673] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:06:19.308 [2024-05-15 10:46:35.298774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:06:19.308 [2024-05-15 10:46:35.298825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:06:19.308 [2024-05-15 10:46:35.298878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:06:19.308 [2024-05-15 10:46:35.298881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:19.873 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:06:19.873 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@860 -- # return 0
00:06:19.873 10:46:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:06:19.873 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:19.873 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:20.131 [2024-05-15 10:46:36.125116] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:20.131 Null1
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:20.131 [2024-05-15 10:46:36.165141] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:06:20.131 [2024-05-15 10:46:36.165431] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:20.131 Null2
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:20.131 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:20.132 Null3
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:20.132 Null4
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:20.132 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420
00:06:20.390 
00:06:20.390 Discovery Log Number of Records 6, Generation counter 6
00:06:20.390 =====Discovery Log Entry 0======
00:06:20.390 trtype: tcp
00:06:20.390 adrfam: ipv4
00:06:20.390 subtype: current discovery subsystem
00:06:20.390 treq: not required
00:06:20.390 portid: 0
00:06:20.390 trsvcid: 4420
00:06:20.390 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:06:20.390 traddr: 10.0.0.2
00:06:20.390 eflags: explicit discovery connections, duplicate discovery information
00:06:20.390 sectype: none
00:06:20.390 =====Discovery Log Entry 1======
00:06:20.390 trtype: tcp
00:06:20.390 adrfam: ipv4
00:06:20.390 subtype: nvme subsystem
00:06:20.390 treq: not required
00:06:20.390 portid: 0
00:06:20.390 trsvcid: 4420
00:06:20.390 subnqn: nqn.2016-06.io.spdk:cnode1
00:06:20.390 traddr: 10.0.0.2
00:06:20.390 eflags: none
00:06:20.390 sectype: none
00:06:20.390 =====Discovery Log Entry 2======
00:06:20.390 trtype: tcp
00:06:20.390 adrfam: ipv4
00:06:20.390 subtype: nvme subsystem
00:06:20.390 treq: not required
00:06:20.390 portid: 0
00:06:20.390 trsvcid: 4420
00:06:20.390 subnqn: nqn.2016-06.io.spdk:cnode2
00:06:20.390 traddr: 10.0.0.2
00:06:20.390 eflags: none
00:06:20.390 sectype: none
00:06:20.390 =====Discovery Log Entry 3======
00:06:20.390 trtype: tcp
00:06:20.390 adrfam: ipv4
00:06:20.390 subtype: nvme subsystem
00:06:20.390 treq: not required
00:06:20.390 portid: 0
00:06:20.390 trsvcid: 4420
00:06:20.390 subnqn: nqn.2016-06.io.spdk:cnode3
00:06:20.390 traddr: 10.0.0.2
00:06:20.390 eflags: none
00:06:20.390 sectype: none
00:06:20.391 =====Discovery Log Entry 4======
00:06:20.391 trtype: tcp
00:06:20.391 adrfam: ipv4
00:06:20.391 subtype: nvme subsystem
00:06:20.391 treq: not required
00:06:20.391 portid: 0
00:06:20.391 trsvcid: 4420
00:06:20.391 subnqn: nqn.2016-06.io.spdk:cnode4
00:06:20.391 traddr: 10.0.0.2
00:06:20.391 eflags: none
00:06:20.391 sectype: none
00:06:20.391 =====Discovery Log Entry 5======
00:06:20.391 trtype: tcp
00:06:20.391 adrfam: ipv4
00:06:20.391 subtype: discovery subsystem referral
00:06:20.391 treq: not required
00:06:20.391 portid: 0
00:06:20.391 trsvcid: 4430
00:06:20.391 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:06:20.391 traddr: 10.0.0.2
00:06:20.391 eflags: none
00:06:20.391 sectype: none
00:06:20.391 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:06:20.391 Perform nvmf subsystem discovery via RPC
00:06:20.391 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:06:20.391 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:20.391 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:06:20.391 [
00:06:20.391 {
00:06:20.391 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:06:20.391 "subtype": "Discovery",
00:06:20.391 "listen_addresses": [
00:06:20.391 {
00:06:20.391 "trtype": "TCP",
00:06:20.391 "adrfam": "IPv4",
00:06:20.391 "traddr": "10.0.0.2",
00:06:20.391 "trsvcid": "4420"
00:06:20.391 }
00:06:20.391 ],
00:06:20.391 "allow_any_host": true,
00:06:20.391 "hosts": []
00:06:20.391 },
00:06:20.391 {
00:06:20.391 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:06:20.391 "subtype": "NVMe",
00:06:20.391 "listen_addresses": [
00:06:20.391 {
00:06:20.391 "trtype": "TCP",
00:06:20.391 "adrfam": "IPv4",
00:06:20.391 "traddr": "10.0.0.2",
00:06:20.391 "trsvcid": "4420"
00:06:20.391 }
00:06:20.391 ],
00:06:20.391 "allow_any_host": true,
00:06:20.391 "hosts": [],
00:06:20.391 "serial_number": "SPDK00000000000001",
00:06:20.391 "model_number": "SPDK bdev Controller",
00:06:20.391 "max_namespaces": 32,
00:06:20.391 "min_cntlid": 1,
00:06:20.391 "max_cntlid": 65519,
00:06:20.391 "namespaces": [
00:06:20.391 {
00:06:20.391 "nsid": 1,
00:06:20.391 "bdev_name": "Null1",
00:06:20.391 "name": "Null1",
00:06:20.391 "nguid": "43A18B6802CE44638F350EAC144D39F8",
00:06:20.391 "uuid": "43a18b68-02ce-4463-8f35-0eac144d39f8"
00:06:20.391 }
00:06:20.391 ]
00:06:20.391 },
00:06:20.391 {
00:06:20.391 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:06:20.391 "subtype": "NVMe",
00:06:20.391 "listen_addresses": [
00:06:20.391 {
00:06:20.391 "trtype": "TCP",
00:06:20.391 "adrfam": "IPv4",
00:06:20.391 "traddr": "10.0.0.2",
00:06:20.391 "trsvcid": "4420"
00:06:20.391 }
00:06:20.391 ],
00:06:20.391 "allow_any_host": true,
00:06:20.391 "hosts": [],
00:06:20.391 "serial_number": "SPDK00000000000002",
00:06:20.391 "model_number": "SPDK bdev Controller",
00:06:20.391 "max_namespaces": 32,
00:06:20.391 "min_cntlid": 1,
00:06:20.391 "max_cntlid": 65519,
00:06:20.391 "namespaces": [
00:06:20.391 {
00:06:20.391 "nsid": 1,
00:06:20.391 "bdev_name": "Null2",
00:06:20.391 "name": "Null2",
00:06:20.391 "nguid": "8058C25A30A34154A81A59885A72B353",
00:06:20.391 "uuid": "8058c25a-30a3-4154-a81a-59885a72b353"
00:06:20.391 }
00:06:20.391 ]
00:06:20.391 },
00:06:20.391 {
00:06:20.391 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:06:20.391 "subtype": "NVMe",
00:06:20.391 "listen_addresses": [
00:06:20.391 { 00:06:20.391 "trtype": "TCP", 00:06:20.391 "adrfam": "IPv4", 00:06:20.391 "traddr": "10.0.0.2", 00:06:20.391 "trsvcid": "4420" 00:06:20.391 } 00:06:20.391 ], 00:06:20.391 "allow_any_host": true, 00:06:20.391 "hosts": [], 00:06:20.391 "serial_number": "SPDK00000000000003", 00:06:20.391 "model_number": "SPDK bdev Controller", 00:06:20.391 "max_namespaces": 32, 00:06:20.391 "min_cntlid": 1, 00:06:20.391 "max_cntlid": 65519, 00:06:20.391 "namespaces": [ 00:06:20.391 { 00:06:20.391 "nsid": 1, 00:06:20.391 "bdev_name": "Null3", 00:06:20.391 "name": "Null3", 00:06:20.391 "nguid": "49A667B9C3BD4755A2EAE85FFD0FAEA6", 00:06:20.391 "uuid": "49a667b9-c3bd-4755-a2ea-e85ffd0faea6" 00:06:20.391 } 00:06:20.391 ] 00:06:20.391 }, 00:06:20.391 { 00:06:20.391 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:20.391 "subtype": "NVMe", 00:06:20.391 "listen_addresses": [ 00:06:20.391 { 00:06:20.391 "trtype": "TCP", 00:06:20.391 "adrfam": "IPv4", 00:06:20.391 "traddr": "10.0.0.2", 00:06:20.391 "trsvcid": "4420" 00:06:20.391 } 00:06:20.391 ], 00:06:20.391 "allow_any_host": true, 00:06:20.391 "hosts": [], 00:06:20.391 "serial_number": "SPDK00000000000004", 00:06:20.391 "model_number": "SPDK bdev Controller", 00:06:20.391 "max_namespaces": 32, 00:06:20.391 "min_cntlid": 1, 00:06:20.391 "max_cntlid": 65519, 00:06:20.391 "namespaces": [ 00:06:20.391 { 00:06:20.391 "nsid": 1, 00:06:20.391 "bdev_name": "Null4", 00:06:20.391 "name": "Null4", 00:06:20.391 "nguid": "E1B3ABD84BE84C558ACD55BC28585C45", 00:06:20.391 "uuid": "e1b3abd8-4be8-4c55-8acd-55bc28585c45" 00:06:20.391 } 00:06:20.391 ] 00:06:20.391 } 00:06:20.391 ] 00:06:20.391 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.391 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:06:20.391 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:20.391 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:20.391 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.391 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.391 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.391 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:20.391 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.391 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.391 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.391 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:20.391 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:20.391 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:06:20.392 
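The per-subsystem teardown traced above can be replayed by hand against a live target; a minimal sketch, assuming the default RPC socket and the NQNs, bdev names, and referral address from this run (rpc_cmd in the trace corresponds to SPDK's scripts/rpc.py):

    # delete each NVMe-oF subsystem, then the null bdev that backed its namespace
    for i in 1 2 3 4; do
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
        scripts/rpc.py bdev_null_delete "Null${i}"
    done
    # drop the discovery referral registered during setup
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
    # confirm nothing is left: an empty list here is what the test asserts below
    scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'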
10:46:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:20.392 rmmod nvme_tcp 00:06:20.392 rmmod nvme_fabrics 00:06:20.392 rmmod nvme_keyring 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2696426 ']' 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2696426 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@946 -- # '[' -z 2696426 ']' 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@950 -- # kill -0 2696426 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # uname 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:20.392 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2696426 00:06:20.650 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:20.650 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:20.650 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2696426' 00:06:20.650 killing process with pid 2696426 00:06:20.650 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # kill 2696426 00:06:20.650 [2024-05-15 10:46:36.629887] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:20.650 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@970 -- # wait 2696426 00:06:20.981 10:46:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:20.981 10:46:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:20.981 10:46:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:20.981 10:46:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:20.981 10:46:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:20.981 10:46:36 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:20.981 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:20.981 10:46:36 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:22.884 10:46:38 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:22.884 00:06:22.884 real 0m6.672s 00:06:22.884 user 
0m7.450s 00:06:22.884 sys 0m2.247s 00:06:22.884 10:46:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:22.884 10:46:38 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:06:22.884 ************************************ 00:06:22.884 END TEST nvmf_target_discovery 00:06:22.884 ************************************ 00:06:22.884 10:46:38 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:22.884 10:46:38 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:22.884 10:46:38 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:22.884 10:46:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:22.884 ************************************ 00:06:22.884 START TEST nvmf_referrals 00:06:22.884 ************************************ 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:06:22.884 * Looking for test storage... 00:06:22.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:22.884 10:46:39 
nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:06:22.884 10:46:39 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:22.884 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:22.885 10:46:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:22.885 10:46:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:22.885 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:22.885 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:22.885 10:46:39 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:06:22.885 10:46:39 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:25.416 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:25.416 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:25.416 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:25.417 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:25.417 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
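The nvmf_tcp_init sequence traced here and continuing just below is plain iproute2 namespace plumbing; condensed into a standalone sketch, assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing this run uses:

    ip netns add cvl_0_0_ns_spdk                      # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # sanity-check the path both ways
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1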
00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:25.417 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:25.675 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:25.675 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:25.675 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:25.675 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:25.675 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:06:25.675 00:06:25.675 --- 10.0.0.2 ping statistics --- 00:06:25.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:25.675 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:06:25.675 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:25.675 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:25.675 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:06:25.675 00:06:25.675 --- 10.0.0.1 ping statistics --- 00:06:25.675 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:25.675 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:06:25.675 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:25.675 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:06:25.675 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:25.675 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:25.675 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:25.675 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:25.675 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:25.675 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:25.675 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:25.675 10:46:41 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:06:25.675 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:25.675 10:46:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:25.675 10:46:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:25.675 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2698946 00:06:25.675 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:25.675 10:46:41 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2698946 00:06:25.675 10:46:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@827 -- # '[' -z 2698946 ']' 00:06:25.675 10:46:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.675 10:46:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:25.675 10:46:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.675 10:46:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:25.675 10:46:41 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:25.675 [2024-05-15 10:46:41.773109] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:06:25.675 [2024-05-15 10:46:41.773199] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:25.675 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.675 [2024-05-15 10:46:41.863045] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:25.934 [2024-05-15 10:46:41.989519] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:25.934 [2024-05-15 10:46:41.989581] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:25.934 [2024-05-15 10:46:41.989598] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:25.934 [2024-05-15 10:46:41.989612] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:25.934 [2024-05-15 10:46:41.989623] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:25.934 [2024-05-15 10:46:41.989691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.934 [2024-05-15 10:46:41.989747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.934 [2024-05-15 10:46:41.989797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:25.934 [2024-05-15 10:46:41.989800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.934 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:25.934 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@860 -- # return 0 00:06:25.934 10:46:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:25.934 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:25.934 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:25.934 10:46:42 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:25.934 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:25.934 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.934 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:25.934 [2024-05-15 10:46:42.151955] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:25.934 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:25.934 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:06:25.934 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:25.934 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:25.934 [2024-05-15 10:46:42.163922] nvmf_rpc.c: 
610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:25.934 [2024-05-15 10:46:42.164248] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:06:26.192 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.192 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:06:26.192 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.192 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:26.192 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.192 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:06:26.192 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.192 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:26.192 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.192 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:06:26.192 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.192 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:26.192 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:26.193 10:46:42 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:26.193 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.451 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:06:26.451 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:06:26.451 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:26.451 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:26.451 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:26.451 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:26.452 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 
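The verification pattern the referrals test repeats from here on pairs the target's RPC view with the host's discovery view and asserts they match; a sketch using the hostnqn/hostid generated for this run:

    # target side: referrals as configured over RPC
    scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
    # host side: what the discovery service at 10.0.0.2:8009 actually hands out,
    # filtered down to records other than the current discovery subsystem itself
    nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
    # the two sorted address lists should be identical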
00:06:26.710 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:26.968 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:06:26.968 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:06:26.968 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:06:26.968 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:06:26.968 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:06:26.968 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:26.968 10:46:42 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:06:26.968 10:46:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:06:26.968 10:46:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:06:26.968 10:46:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:06:26.968 10:46:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:06:26.968 10:46:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:26.968 10:46:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:06:26.968 10:46:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:06:26.968 10:46:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:06:26.968 10:46:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.968 10:46:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:26.968 10:46:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:26.968 10:46:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:06:26.968 10:46:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:06:26.968 10:46:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:26.968 10:46:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:27.225 10:46:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:27.225 10:46:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:06:27.225 10:46:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:27.226 rmmod nvme_tcp 00:06:27.226 rmmod nvme_fabrics 00:06:27.226 rmmod nvme_keyring 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2698946 ']' 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2698946 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@946 -- # '[' -z 2698946 ']' 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@950 -- # kill -0 2698946 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # uname 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2698946 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2698946' 00:06:27.226 killing process with pid 2698946 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # kill 2698946 00:06:27.226 [2024-05-15 10:46:43.368252] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:27.226 10:46:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@970 -- # wait 2698946 00:06:27.499 10:46:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:27.499 10:46:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:27.499 10:46:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:27.499 10:46:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:27.499 10:46:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 
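nvmftestfini's cleanup, traced identically at the end of both tests in this log, reduces to unloading the host-side NVMe modules, killing the target process, and tearing down the namespace; a sketch using the pid from this run (the ip netns delete line is an assumption about what _remove_spdk_ns does, since its output is redirected away in the trace):

    modprobe -v -r nvme-tcp          # also pulls out nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 2698946                     # the nvmf_tgt reactor process ($nvmfpid)
    ip netns delete cvl_0_0_ns_spdk  # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1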
00:06:27.499 10:46:43 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:27.499 10:46:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:27.499 10:46:43 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.037 10:46:45 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:30.037 00:06:30.037 real 0m6.682s 00:06:30.037 user 0m7.950s 00:06:30.037 sys 0m2.338s 00:06:30.037 10:46:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:30.037 10:46:45 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:06:30.037 ************************************ 00:06:30.037 END TEST nvmf_referrals 00:06:30.037 ************************************ 00:06:30.037 10:46:45 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:30.037 10:46:45 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:30.037 10:46:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:30.037 10:46:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:30.037 ************************************ 00:06:30.037 START TEST nvmf_connect_disconnect 00:06:30.037 ************************************ 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:06:30.037 * Looking for test storage... 00:06:30.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:30.037 10:46:45 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
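For context on the host identity used throughout these tests: nvmf/common.sh (lines 17-19 in the trace above) derives both values from nvme-cli at source time. A hedged sketch of that derivation, assuming only that gen-hostnqn returns the usual nqn.2014-08.org.nvmexpress:uuid:<uuid> form; the parameter expansion is one plausible way to split it, not necessarily the script's exact code:

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep the UUID suffix as the host ID
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")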
00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:30.037 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:30.038 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:30.038 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.038 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:30.038 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:30.038 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:06:30.038 10:46:45 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:06:32.565 
10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:32.565 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:32.566 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:32.566 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == 
unbound ]] 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:32.566 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:32.566 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:32.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:32.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:06:32.566 00:06:32.566 --- 10.0.0.2 ping statistics --- 00:06:32.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.566 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:32.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:32.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:06:32.566 00:06:32.566 --- 10.0.0.1 ping statistics --- 00:06:32.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:32.566 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:32.566 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2701524 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2701524 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@827 -- # '[' -z 2701524 ']' 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:32.567 [2024-05-15 10:46:48.411103] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
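At this point nvmf_tgt is coming up inside the namespace (nvmfappstart at connect_disconnect.sh@16), and the rpc_cmd calls traced in the lines that follow provision it. A hedged sketch of that sequence using scripts/rpc.py against the app's default unix socket (/var/tmp/spdk.sock, per the waitforlisten message above); rpc_cmd in this harness wraps calls of this kind, and the method names and arguments below are the ones traced, while the rpc wrapper function is illustrative:

  rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
  rpc nvmf_create_transport -t tcp -o -u 8192 -c 0     # flags as traced at connect_disconnect.sh@18
  rpc bdev_malloc_create 64 512                        # MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE -> Malloc0
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The decode_rpc_listen_address deprecation notices in this log are emitted when that listener address is parsed, which is why each test section reports one hit at shutdown.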
00:06:32.567 [2024-05-15 10:46:48.411197] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:32.567 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.567 [2024-05-15 10:46:48.499623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:32.567 [2024-05-15 10:46:48.626366] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:32.567 [2024-05-15 10:46:48.626432] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:32.567 [2024-05-15 10:46:48.626449] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:32.567 [2024-05-15 10:46:48.626462] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:32.567 [2024-05-15 10:46:48.626474] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:32.567 [2024-05-15 10:46:48.626536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.567 [2024-05-15 10:46:48.626572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.567 [2024-05-15 10:46:48.626625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.567 [2024-05-15 10:46:48.626629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # return 0 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:32.567 [2024-05-15 10:46:48.774528] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.567 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:32.825 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.825 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:06:32.825 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:32.825 10:46:48 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.825 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:32.825 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.825 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:32.825 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.825 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:32.825 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.825 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:32.825 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:32.825 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:32.825 [2024-05-15 10:46:48.826396] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:32.825 [2024-05-15 10:46:48.826658] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:32.825 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:32.825 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:06:32.825 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:06:32.825 10:46:48 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:06:35.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:38.639 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:41.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:43.697 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:46.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:46.227 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:06:46.227 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:06:46.227 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:46.227 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:06:46.227 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:46.227 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:06:46.227 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:46.227 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:46.227 rmmod nvme_tcp 00:06:46.227 rmmod nvme_fabrics 00:06:46.227 rmmod nvme_keyring 00:06:46.227 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:46.227 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:06:46.227 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:06:46.227 10:47:02 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2701524 ']' 00:06:46.227 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2701524 00:06:46.227 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@946 -- # '[' -z 2701524 ']' 00:06:46.227 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # kill -0 2701524 00:06:46.227 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # uname 00:06:46.227 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:46.227 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2701524 00:06:46.227 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:46.227 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:46.227 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2701524' 00:06:46.227 killing process with pid 2701524 00:06:46.227 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # kill 2701524 00:06:46.227 [2024-05-15 10:47:02.455033] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:46.227 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # wait 2701524 00:06:46.794 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:46.794 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:46.794 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:46.794 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:46.794 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:46.794 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:46.794 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:46.794 10:47:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.725 10:47:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:48.725 00:06:48.725 real 0m19.061s 00:06:48.725 user 0m55.936s 00:06:48.725 sys 0m3.485s 00:06:48.725 10:47:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:48.725 10:47:04 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:06:48.725 ************************************ 00:06:48.725 END TEST nvmf_connect_disconnect 00:06:48.725 ************************************ 00:06:48.725 10:47:04 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:48.725 10:47:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:48.725 10:47:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:48.725 10:47:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:48.725 ************************************ 00:06:48.725 START TEST nvmf_multitarget 
00:06:48.725 ************************************ 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:06:48.725 * Looking for test storage... 00:06:48.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
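Each nvmftestinit begins, as here, by routing _remove_spdk_ns through xtrace_disable_per_cmd so a namespace left over from the previous test cannot collide with the ip netns add that follows. A plausible sketch of that cleanup, stated as an assumption about the helper's effect rather than its verbatim body:

  # Drop any namespace this harness created earlier; the suffix match keeps
  # unrelated namespaces on the host untouched.
  for ns in $(ip netns list | awk '{print $1}'); do
    case $ns in
      *_ns_spdk) ip netns delete "$ns" ;;
    esac
  done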
00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:06:48.725 10:47:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:51.259 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:51.259 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:06:51.259 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:51.260 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:51.260 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:51.260 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
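The discovery loop above boils down to a sysfs walk: both E810 functions (vendor 0x8086, device 0x159b, driver ice) matched, and each PCI node is then asked which kernel net devices it exposes. A condensed sketch with this run's two addresses hard-coded for illustration:

  pci_devs=(0000:0a:00.0 0000:0a:00.1)   # the two ice ports found above
  net_devs=()
  for pci in "${pci_devs[@]}"; do
    for path in /sys/bus/pci/devices/$pci/net/*; do
      [[ -e $path ]] && net_devs+=("${path##*/}")   # -> cvl_0_0, cvl_0_1
    done
  done
  printf 'net dev: %s\n' "${net_devs[@]}"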
00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:51.260 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:51.260 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:51.520 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:51.520 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:51.520 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:51.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:51.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:06:51.520 00:06:51.520 --- 10.0.0.2 ping statistics --- 00:06:51.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.520 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:06:51.520 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:51.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:51.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:06:51.520 00:06:51.520 --- 10.0.0.1 ping statistics --- 00:06:51.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:51.520 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:06:51.520 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:51.520 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:06:51.520 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:51.520 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:51.520 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:51.520 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:51.520 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:51.520 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:51.520 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:51.520 10:47:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:06:51.520 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:51.520 10:47:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:51.520 10:47:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:51.520 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2705577 00:06:51.520 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:51.520 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2705577 00:06:51.520 10:47:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@827 -- # '[' -z 2705577 ']' 00:06:51.520 10:47:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.520 10:47:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:51.520 10:47:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.520 10:47:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:51.520 10:47:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:51.520 [2024-05-15 10:47:07.612592] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
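The nvmf_tcp_init sequence that produced the ping statistics above is the same plumbing every tcp test in this run performs: one port moves into a namespace to act as the target, the other stays in the root namespace as the initiator, and an iptables rule plus two pings validate the path. Restating the commands already traced, in order:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1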
00:06:51.520 [2024-05-15 10:47:07.612671] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:51.520 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.520 [2024-05-15 10:47:07.699443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:51.779 [2024-05-15 10:47:07.821336] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:51.779 [2024-05-15 10:47:07.821396] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:51.779 [2024-05-15 10:47:07.821412] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:51.779 [2024-05-15 10:47:07.821426] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:51.779 [2024-05-15 10:47:07.821438] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:51.779 [2024-05-15 10:47:07.821529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.779 [2024-05-15 10:47:07.821581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:51.779 [2024-05-15 10:47:07.821630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:51.779 [2024-05-15 10:47:07.821633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.779 10:47:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:51.779 10:47:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@860 -- # return 0 00:06:51.779 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:51.779 10:47:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:51.779 10:47:07 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:51.779 10:47:07 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:51.779 10:47:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:06:51.779 10:47:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:51.779 10:47:07 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:06:52.037 10:47:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:06:52.037 10:47:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:06:52.038 "nvmf_tgt_1" 00:06:52.038 10:47:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:06:52.296 "nvmf_tgt_2" 00:06:52.296 10:47:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:52.296 10:47:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:06:52.296 10:47:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:06:52.296 
10:47:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:06:52.554 true 00:06:52.554 10:47:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:06:52.554 true 00:06:52.554 10:47:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:06:52.554 10:47:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:06:52.554 10:47:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:06:52.554 10:47:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:52.554 10:47:08 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:06:52.554 10:47:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:52.555 10:47:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:06:52.555 10:47:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:52.555 10:47:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:06:52.555 10:47:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:52.555 10:47:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:52.555 rmmod nvme_tcp 00:06:52.813 rmmod nvme_fabrics 00:06:52.813 rmmod nvme_keyring 00:06:52.813 10:47:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:52.813 10:47:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:06:52.813 10:47:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:06:52.813 10:47:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2705577 ']' 00:06:52.813 10:47:08 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2705577 00:06:52.813 10:47:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@946 -- # '[' -z 2705577 ']' 00:06:52.813 10:47:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@950 -- # kill -0 2705577 00:06:52.813 10:47:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # uname 00:06:52.813 10:47:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:06:52.813 10:47:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2705577 00:06:52.813 10:47:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:06:52.813 10:47:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:06:52.813 10:47:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2705577' 00:06:52.813 killing process with pid 2705577 00:06:52.813 10:47:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # kill 2705577 00:06:52.813 10:47:08 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@970 -- # wait 2705577 00:06:53.072 10:47:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:53.072 10:47:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:53.072 10:47:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:53.072 10:47:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:53.072 10:47:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:53.072 10:47:09 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:53.072 10:47:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:53.072 10:47:09 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:54.977 10:47:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:54.977 00:06:54.977 real 0m6.328s 00:06:54.977 user 0m6.748s 00:06:54.977 sys 0m2.297s 00:06:54.977 10:47:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # xtrace_disable 00:06:54.977 10:47:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:06:54.977 ************************************ 00:06:54.977 END TEST nvmf_multitarget 00:06:54.977 ************************************ 00:06:55.236 10:47:11 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:06:55.236 10:47:11 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:06:55.236 10:47:11 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:06:55.236 10:47:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:55.236 ************************************ 00:06:55.236 START TEST nvmf_rpc 00:06:55.236 ************************************ 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:06:55.236 * Looking for test storage... 00:06:55.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:55.236 10:47:11 
nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.236 
10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:06:55.236 10:47:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:57.765 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:57.765 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:57.765 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:57.765 
10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:57.765 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:57.765 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:57.766 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:57.766 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:06:57.766 00:06:57.766 --- 10.0.0.2 ping statistics --- 00:06:57.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.766 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:57.766 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:57.766 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:06:57.766 00:06:57.766 --- 10.0.0.1 ping statistics --- 00:06:57.766 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:57.766 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@720 -- # xtrace_disable 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2707969 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2707969 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@827 -- # '[' -z 2707969 ']' 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local max_retries=100 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # xtrace_disable 00:06:57.766 10:47:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.766 [2024-05-15 10:47:13.961022] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
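
For reference, the namespace plumbing and target launch traced above reduce to the following shell sequence. This is a minimal sketch assembled from the commands visible in this log (the cvl_0_0/cvl_0_1 port pair, the 10.0.0.0/24 addresses, and the nvmf_tgt flags are all taken from the trace); the readiness loop at the end is an illustrative stand-in for the harness's waitforlisten helper, not the actual common.sh code:

    # Move the target-side port into its own network namespace; the initiator port
    # stays in the root namespace, so NVMe/TCP traffic takes a real network hop.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Launch nvmf_tgt inside the namespace, then poll its RPC socket until it answers.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done

Because /var/tmp/spdk.sock is a filesystem object, rpc.py can reach the namespaced target from the root namespace; only the NVMe/TCP data path crosses between the two physical ports.
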
00:06:57.766 [2024-05-15 10:47:13.961102] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.024 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.024 [2024-05-15 10:47:14.040562] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:58.024 [2024-05-15 10:47:14.150557] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:58.024 [2024-05-15 10:47:14.150615] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:58.024 [2024-05-15 10:47:14.150628] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:58.024 [2024-05-15 10:47:14.150640] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:58.024 [2024-05-15 10:47:14.150649] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:58.024 [2024-05-15 10:47:14.150705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.024 [2024-05-15 10:47:14.150762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.024 [2024-05-15 10:47:14.150829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.024 [2024-05-15 10:47:14.150832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.958 10:47:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:06:58.958 10:47:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@860 -- # return 0 00:06:58.958 10:47:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:58.958 10:47:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:58.958 10:47:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.958 10:47:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:58.958 10:47:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:06:58.958 10:47:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.958 10:47:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.958 10:47:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.958 10:47:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:06:58.958 "tick_rate": 2700000000, 00:06:58.958 "poll_groups": [ 00:06:58.958 { 00:06:58.958 "name": "nvmf_tgt_poll_group_000", 00:06:58.958 "admin_qpairs": 0, 00:06:58.958 "io_qpairs": 0, 00:06:58.958 "current_admin_qpairs": 0, 00:06:58.958 "current_io_qpairs": 0, 00:06:58.958 "pending_bdev_io": 0, 00:06:58.958 "completed_nvme_io": 0, 00:06:58.958 "transports": [] 00:06:58.958 }, 00:06:58.958 { 00:06:58.958 "name": "nvmf_tgt_poll_group_001", 00:06:58.958 "admin_qpairs": 0, 00:06:58.958 "io_qpairs": 0, 00:06:58.958 "current_admin_qpairs": 0, 00:06:58.958 "current_io_qpairs": 0, 00:06:58.958 "pending_bdev_io": 0, 00:06:58.958 "completed_nvme_io": 0, 00:06:58.958 "transports": [] 00:06:58.958 }, 00:06:58.958 { 00:06:58.958 "name": "nvmf_tgt_poll_group_002", 00:06:58.958 "admin_qpairs": 0, 00:06:58.958 "io_qpairs": 0, 00:06:58.958 "current_admin_qpairs": 0, 00:06:58.958 "current_io_qpairs": 0, 00:06:58.958 "pending_bdev_io": 0, 00:06:58.958 "completed_nvme_io": 0, 00:06:58.958 "transports": [] 
00:06:58.958 }, 00:06:58.958 { 00:06:58.958 "name": "nvmf_tgt_poll_group_003", 00:06:58.958 "admin_qpairs": 0, 00:06:58.958 "io_qpairs": 0, 00:06:58.958 "current_admin_qpairs": 0, 00:06:58.958 "current_io_qpairs": 0, 00:06:58.958 "pending_bdev_io": 0, 00:06:58.958 "completed_nvme_io": 0, 00:06:58.958 "transports": [] 00:06:58.958 } 00:06:58.958 ] 00:06:58.958 }' 00:06:58.958 10:47:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:06:58.958 10:47:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:06:58.958 10:47:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:06:58.958 10:47:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.958 [2024-05-15 10:47:15.083445] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:06:58.958 "tick_rate": 2700000000, 00:06:58.958 "poll_groups": [ 00:06:58.958 { 00:06:58.958 "name": "nvmf_tgt_poll_group_000", 00:06:58.958 "admin_qpairs": 0, 00:06:58.958 "io_qpairs": 0, 00:06:58.958 "current_admin_qpairs": 0, 00:06:58.958 "current_io_qpairs": 0, 00:06:58.958 "pending_bdev_io": 0, 00:06:58.958 "completed_nvme_io": 0, 00:06:58.958 "transports": [ 00:06:58.958 { 00:06:58.958 "trtype": "TCP" 00:06:58.958 } 00:06:58.958 ] 00:06:58.958 }, 00:06:58.958 { 00:06:58.958 "name": "nvmf_tgt_poll_group_001", 00:06:58.958 "admin_qpairs": 0, 00:06:58.958 "io_qpairs": 0, 00:06:58.958 "current_admin_qpairs": 0, 00:06:58.958 "current_io_qpairs": 0, 00:06:58.958 "pending_bdev_io": 0, 00:06:58.958 "completed_nvme_io": 0, 00:06:58.958 "transports": [ 00:06:58.958 { 00:06:58.958 "trtype": "TCP" 00:06:58.958 } 00:06:58.958 ] 00:06:58.958 }, 00:06:58.958 { 00:06:58.958 "name": "nvmf_tgt_poll_group_002", 00:06:58.958 "admin_qpairs": 0, 00:06:58.958 "io_qpairs": 0, 00:06:58.958 "current_admin_qpairs": 0, 00:06:58.958 "current_io_qpairs": 0, 00:06:58.958 "pending_bdev_io": 0, 00:06:58.958 "completed_nvme_io": 0, 00:06:58.958 "transports": [ 00:06:58.958 { 00:06:58.958 "trtype": "TCP" 00:06:58.958 } 00:06:58.958 ] 00:06:58.958 }, 00:06:58.958 { 00:06:58.958 "name": "nvmf_tgt_poll_group_003", 00:06:58.958 "admin_qpairs": 0, 00:06:58.958 "io_qpairs": 0, 00:06:58.958 "current_admin_qpairs": 0, 00:06:58.958 "current_io_qpairs": 0, 00:06:58.958 "pending_bdev_io": 0, 00:06:58.958 "completed_nvme_io": 0, 00:06:58.958 "transports": [ 00:06:58.958 { 00:06:58.958 "trtype": "TCP" 00:06:58.958 } 00:06:58.958 ] 00:06:58.958 } 00:06:58.958 ] 
00:06:58.958 }' 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:58.958 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.217 Malloc1 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.217 [2024-05-15 10:47:15.239563] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:59.217 [2024-05-15 10:47:15.239876] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:59.217 10:47:15 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:06:59.217 [2024-05-15 10:47:15.262515] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:06:59.217 Failed to write to /dev/nvme-fabrics: Input/output error 00:06:59.217 could not add new controller: failed to write to nvme-fabrics device 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.217 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:59.784 10:47:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 
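
The rejected connect above, followed by nvmf_subsystem_add_host and a clean reconnect, is the point of this step: with allow_any_host disabled, the target's access check (nvmf_qpair_access_allowed() in ctrlr.c, the *ERROR* line in the trace) refuses any host NQN that is not on the subsystem's allow list, and nvme-cli surfaces that as an I/O error on /dev/nvme-fabrics. Stripped of the rpc_cmd/NOT wrappers, the sequence is roughly the following sketch, reusing the NQNs from this run:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # Host not on the allow list yet: this connect is expected to fail.
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$HOSTNQN" && echo "unexpected: connect should have been rejected"

    # Allow-list the host NQN; the identical connect then succeeds.
    "$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$HOSTNQN"

The later nvmf_subsystem_allow_any_host -e call inverts the check, which is why the test can keep reconnecting without re-adding the host.
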
00:06:59.784 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:06:59.784 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:06:59.784 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:06:59.784 10:47:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:01.683 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:01.683 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:01.683 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:01.683 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:01.683 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:01.683 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:01.683 10:47:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:01.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:01.941 10:47:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:01.941 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:01.941 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:01.941 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:01.941 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:01.942 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:01.942 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:01.942 10:47:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:01.942 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.942 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.942 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.942 10:47:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:01.942 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:01.942 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:01.942 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:07:01.942 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.942 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:07:01.942 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.942 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:07:01.942 10:47:17 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:01.942 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:07:01.942 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:07:01.942 10:47:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:01.942 [2024-05-15 10:47:18.013910] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:07:01.942 Failed to write to /dev/nvme-fabrics: Input/output error 00:07:01.942 could not add new controller: failed to write to nvme-fabrics device 00:07:01.942 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:01.942 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:01.942 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:01.942 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:01.942 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:07:01.942 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.942 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.942 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.942 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:02.508 10:47:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:07:02.508 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:02.508 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:02.508 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:02.508 10:47:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:04.439 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:04.439 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:04.439 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:04.439 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:04.439 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:04.439 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:04.439 10:47:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:04.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.698 [2024-05-15 10:47:20.763468] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:04.698 10:47:20 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:05.263 10:47:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:05.263 10:47:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:05.263 10:47:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:05.263 10:47:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:05.263 10:47:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:07.162 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:07.162 
10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:07.162 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:07.162 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:07.162 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:07.162 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:07.162 10:47:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:07.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.420 [2024-05-15 10:47:23.449086] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.420 10:47:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:07.985 10:47:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:07.985 10:47:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:07.985 10:47:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:07.985 10:47:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:07.985 10:47:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:09.882 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:09.882 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:09.882 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:09.882 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:09.882 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:09.882 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:09.882 10:47:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:10.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.141 10:47:26 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.141 [2024-05-15 10:47:26.219725] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.141 10:47:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:10.706 10:47:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:10.706 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:10.706 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:10.706 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:10.706 10:47:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:12.610 10:47:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:12.610 10:47:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:12.610 10:47:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:12.610 10:47:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:12.610 10:47:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:12.610 10:47:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:12.610 10:47:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:12.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:12.868 10:47:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:12.868 10:47:28 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1215 -- # local i=0 00:07:12.868 10:47:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:12.868 10:47:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:12.868 10:47:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:12.868 10:47:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:12.868 10:47:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:12.868 10:47:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.868 10:47:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.868 10:47:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.868 10:47:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.869 10:47:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:12.869 10:47:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.869 10:47:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.869 10:47:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.869 10:47:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:12.869 10:47:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:12.869 10:47:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.869 10:47:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.869 10:47:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.869 10:47:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:12.869 10:47:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.869 10:47:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.869 [2024-05-15 10:47:28.992160] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:12.869 10:47:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.869 10:47:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:12.869 10:47:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.869 10:47:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.869 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.869 10:47:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:12.869 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:12.869 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:12.869 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:12.869 10:47:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:13.434 10:47:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:07:13.434 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:13.434 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:13.434 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:13.434 10:47:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:15.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.962 10:47:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:15.963 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.963 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.963 
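[editor's note] Each pass of this first loop is the same RPC round trip; the log shows it verbatim with only timestamps changing. A minimal sketch of one pass, assuming a running nvmf_tgt with rpc.py reachable, a pre-created Malloc1 bdev, and the 10.0.0.2:4420 listener used throughout this run (names, serial, and addresses are taken from the log, not from any fixed SPDK default; the real waitforserial polls with a retry cap rather than forever):

  #!/usr/bin/env bash
  rpc=scripts/rpc.py                      # path assumed; adjust to your checkout
  nqn=nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME         # rpc.sh@82
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5                    # explicit NSID 5
  $rpc nvmf_subsystem_allow_any_host "$nqn"
  nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420                 # host side
  # waitforserial: poll until the namespace's block device shows up
  until lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME; do sleep 2; done
  nvme disconnect -n "$nqn"
  $rpc nvmf_subsystem_remove_ns "$nqn" 5
  $rpc nvmf_delete_subsystem "$nqn"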
[2024-05-15 10:47:31.714460] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:15.963 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.963 10:47:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:07:15.963 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.963 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.963 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.963 10:47:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:15.963 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:15.963 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.963 10:47:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:15.963 10:47:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:16.220 10:47:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:07:16.220 10:47:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1194 -- # local i=0 00:07:16.220 10:47:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:07:16.220 10:47:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:07:16.220 10:47:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1201 -- # sleep 2 00:07:18.118 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:07:18.118 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:07:18.118 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:07:18.118 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:07:18.118 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:07:18.118 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # return 0 00:07:18.118 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:18.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:18.376 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:18.376 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1215 -- # local i=0 00:07:18.376 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:07:18.376 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:18.376 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:07:18.376 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:18.376 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # return 0 00:07:18.376 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:18.376 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.376 10:47:34 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:07:18.376 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.376 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:18.376 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.376 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.376 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.376 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:07:18.376 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:18.376 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.377 [2024-05-15 10:47:34.414889] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 
-- # xtrace_disable 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.377 [2024-05-15 10:47:34.462983] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.377 [2024-05-15 10:47:34.511147] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:18.377 
10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.377 [2024-05-15 10:47:34.559330] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.377 10:47:34 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.377 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.377 [2024-05-15 10:47:34.607511] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.636 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.636 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:18.636 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.636 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.636 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.636 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:07:18.636 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.636 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.636 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.636 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.636 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.636 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.636 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.636 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:18.636 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.636 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.636 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:18.636 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:07:18.636 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:18.636 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:18.636 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
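[editor's note] The seq 1 5 loop that just finished (rpc.sh@99-107) never touches the host side: it only churns subsystem and namespace state. The one behavioral difference from the first loop is that nvmf_subsystem_add_ns is called without -n, so the target auto-assigns the first free NSID (1 here), which is what the matching remove_ns 1 relies on. Condensed under the same assumptions as the sketch above:

  for i in $(seq 1 5); do
    $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns "$nqn" Malloc1     # no -n: NSID auto-assigned (1)
    $rpc nvmf_subsystem_allow_any_host "$nqn"
    $rpc nvmf_subsystem_remove_ns "$nqn" 1
    $rpc nvmf_delete_subsystem "$nqn"
  done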
00:07:18.636 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:07:18.636 "tick_rate": 2700000000, 00:07:18.636 "poll_groups": [ 00:07:18.636 { 00:07:18.636 "name": "nvmf_tgt_poll_group_000", 00:07:18.636 "admin_qpairs": 2, 00:07:18.636 "io_qpairs": 84, 00:07:18.636 "current_admin_qpairs": 0, 00:07:18.636 "current_io_qpairs": 0, 00:07:18.636 "pending_bdev_io": 0, 00:07:18.636 "completed_nvme_io": 278, 00:07:18.636 "transports": [ 00:07:18.636 { 00:07:18.636 "trtype": "TCP" 00:07:18.636 } 00:07:18.636 ] 00:07:18.636 }, 00:07:18.636 { 00:07:18.636 "name": "nvmf_tgt_poll_group_001", 00:07:18.636 "admin_qpairs": 2, 00:07:18.636 "io_qpairs": 84, 00:07:18.636 "current_admin_qpairs": 0, 00:07:18.636 "current_io_qpairs": 0, 00:07:18.636 "pending_bdev_io": 0, 00:07:18.636 "completed_nvme_io": 164, 00:07:18.636 "transports": [ 00:07:18.636 { 00:07:18.636 "trtype": "TCP" 00:07:18.636 } 00:07:18.636 ] 00:07:18.636 }, 00:07:18.636 { 00:07:18.636 "name": "nvmf_tgt_poll_group_002", 00:07:18.636 "admin_qpairs": 1, 00:07:18.636 "io_qpairs": 84, 00:07:18.636 "current_admin_qpairs": 0, 00:07:18.636 "current_io_qpairs": 0, 00:07:18.636 "pending_bdev_io": 0, 00:07:18.636 "completed_nvme_io": 108, 00:07:18.637 "transports": [ 00:07:18.637 { 00:07:18.637 "trtype": "TCP" 00:07:18.637 } 00:07:18.637 ] 00:07:18.637 }, 00:07:18.637 { 00:07:18.637 "name": "nvmf_tgt_poll_group_003", 00:07:18.637 "admin_qpairs": 2, 00:07:18.637 "io_qpairs": 84, 00:07:18.637 "current_admin_qpairs": 0, 00:07:18.637 "current_io_qpairs": 0, 00:07:18.637 "pending_bdev_io": 0, 00:07:18.637 "completed_nvme_io": 136, 00:07:18.637 "transports": [ 00:07:18.637 { 00:07:18.637 "trtype": "TCP" 00:07:18.637 } 00:07:18.637 ] 00:07:18.637 } 00:07:18.637 ] 00:07:18.637 }' 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:18.637 rmmod nvme_tcp 00:07:18.637 rmmod nvme_fabrics 00:07:18.637 rmmod nvme_keyring 00:07:18.637 
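[editor's note] The jsum checks just above ((( 7 > 0 )) and (( 336 > 0 ))) are plain jq-plus-awk sums over the captured nvmf_get_stats JSON: 2+2+1+2 admin qpairs and 4*84 I/O qpairs across the four poll groups. Equivalent one-liners, assuming the JSON is held in $stats as in the log:

  echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END{print s}'   # -> 7
  echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END{print s}'   # -> 336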
10:47:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2707969 ']' 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2707969 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@946 -- # '[' -z 2707969 ']' 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@950 -- # kill -0 2707969 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # uname 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2707969 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2707969' 00:07:18.637 killing process with pid 2707969 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # kill 2707969 00:07:18.637 [2024-05-15 10:47:34.828231] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:18.637 10:47:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@970 -- # wait 2707969 00:07:19.204 10:47:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:19.204 10:47:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:19.204 10:47:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:19.204 10:47:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:19.204 10:47:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:19.204 10:47:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:19.204 10:47:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:19.204 10:47:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.114 10:47:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:21.114 00:07:21.114 real 0m25.937s 00:07:21.114 user 1m23.425s 00:07:21.114 sys 0m4.173s 00:07:21.114 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:21.114 10:47:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.114 ************************************ 00:07:21.114 END TEST nvmf_rpc 00:07:21.114 ************************************ 00:07:21.114 10:47:37 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:21.114 10:47:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:21.114 10:47:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:21.114 10:47:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:21.114 ************************************ 00:07:21.114 START TEST nvmf_invalid 00:07:21.114 ************************************ 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:07:21.114 * Looking for test storage... 00:07:21.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:21.114 10:47:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:21.115 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:21.115 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:21.115 10:47:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:07:21.115 10:47:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:23.685 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:23.685 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:23.685 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:23.685 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:23.685 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:23.686 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:23.686 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:23.686 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:23.686 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:23.686 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:23.944 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:23.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:23.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:07:23.944 00:07:23.944 --- 10.0.0.2 ping statistics --- 00:07:23.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.944 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:07:23.944 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:23.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:23.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:07:23.944 00:07:23.944 --- 10.0.0.1 ping statistics --- 00:07:23.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:23.944 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:07:23.944 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:23.944 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:07:23.944 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:23.944 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:23.944 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:23.944 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:23.944 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:23.944 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:23.944 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:23.944 10:47:39 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:07:23.945 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:23.945 10:47:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:23.945 10:47:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:23.945 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2712893 00:07:23.945 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:23.945 10:47:39 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2712893 00:07:23.945 10:47:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@827 -- # '[' -z 2712893 ']' 00:07:23.945 10:47:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.945 10:47:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:23.945 10:47:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.945 10:47:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:23.945 10:47:39 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:23.945 [2024-05-15 10:47:40.006522] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
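[editor's note] The nvmf_tcp_init steps above put the target end of the link into its own network namespace, so the NVMe/TCP traffic crosses the physical e810 port pair instead of loopback. Distilled from the commands in the log, assuming the cvl_0_0/cvl_0_1 interface names from this machine (they vary per NIC):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port into the netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target sanity check

nvmf_tgt itself is then launched via "ip netns exec cvl_0_0_ns_spdk", which is why the listener on 10.0.0.2:4420 is reachable from the host-side 10.0.0.1.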
00:07:23.945 [2024-05-15 10:47:40.006609] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.945 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.945 [2024-05-15 10:47:40.085177] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.203 [2024-05-15 10:47:40.198950] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:24.203 [2024-05-15 10:47:40.199002] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:24.203 [2024-05-15 10:47:40.199016] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:24.203 [2024-05-15 10:47:40.199026] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:24.203 [2024-05-15 10:47:40.199036] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:24.203 [2024-05-15 10:47:40.199086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.203 [2024-05-15 10:47:40.199117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.203 [2024-05-15 10:47:40.199173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.203 [2024-05-15 10:47:40.199175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.203 10:47:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:24.203 10:47:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@860 -- # return 0 00:07:24.203 10:47:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:24.203 10:47:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:24.203 10:47:40 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:07:24.203 10:47:40 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:24.203 10:47:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:07:24.203 10:47:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6002 00:07:24.461 [2024-05-15 10:47:40.593464] nvmf_rpc.c: 391:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:07:24.461 10:47:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:07:24.461 { 00:07:24.461 "nqn": "nqn.2016-06.io.spdk:cnode6002", 00:07:24.461 "tgt_name": "foobar", 00:07:24.461 "method": "nvmf_create_subsystem", 00:07:24.461 "req_id": 1 00:07:24.461 } 00:07:24.461 Got JSON-RPC error response 00:07:24.461 response: 00:07:24.461 { 00:07:24.461 "code": -32603, 00:07:24.461 "message": "Unable to find target foobar" 00:07:24.461 }' 00:07:24.461 10:47:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:07:24.461 { 00:07:24.461 "nqn": "nqn.2016-06.io.spdk:cnode6002", 00:07:24.461 "tgt_name": "foobar", 00:07:24.461 "method": "nvmf_create_subsystem", 00:07:24.461 "req_id": 1 00:07:24.461 } 00:07:24.461 Got JSON-RPC error response 00:07:24.461 response: 00:07:24.461 { 00:07:24.461 "code": -32603, 00:07:24.461 "message": "Unable to find target foobar" 00:07:24.461 } == *\U\n\a\b\l\e\ \t\o\ 
\f\i\n\d\ \t\a\r\g\e\t* ]] 00:07:24.461 10:47:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:07:24.461 10:47:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode10219 00:07:24.719 [2024-05-15 10:47:40.850362] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10219: invalid serial number 'SPDKISFASTANDAWESOME' 00:07:24.719 10:47:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:07:24.719 { 00:07:24.719 "nqn": "nqn.2016-06.io.spdk:cnode10219", 00:07:24.719 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:24.719 "method": "nvmf_create_subsystem", 00:07:24.719 "req_id": 1 00:07:24.719 } 00:07:24.719 Got JSON-RPC error response 00:07:24.719 response: 00:07:24.719 { 00:07:24.719 "code": -32602, 00:07:24.719 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:24.719 }' 00:07:24.719 10:47:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:07:24.719 { 00:07:24.719 "nqn": "nqn.2016-06.io.spdk:cnode10219", 00:07:24.719 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:07:24.719 "method": "nvmf_create_subsystem", 00:07:24.719 "req_id": 1 00:07:24.719 } 00:07:24.719 Got JSON-RPC error response 00:07:24.719 response: 00:07:24.720 { 00:07:24.720 "code": -32602, 00:07:24.720 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:07:24.720 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:24.720 10:47:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:07:24.720 10:47:40 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode17127 00:07:24.978 [2024-05-15 10:47:41.103196] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17127: invalid model number 'SPDK_Controller' 00:07:24.978 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:07:24.978 { 00:07:24.978 "nqn": "nqn.2016-06.io.spdk:cnode17127", 00:07:24.978 "model_number": "SPDK_Controller\u001f", 00:07:24.978 "method": "nvmf_create_subsystem", 00:07:24.978 "req_id": 1 00:07:24.978 } 00:07:24.978 Got JSON-RPC error response 00:07:24.978 response: 00:07:24.978 { 00:07:24.978 "code": -32602, 00:07:24.978 "message": "Invalid MN SPDK_Controller\u001f" 00:07:24.978 }' 00:07:24.978 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:07:24.978 { 00:07:24.978 "nqn": "nqn.2016-06.io.spdk:cnode17127", 00:07:24.978 "model_number": "SPDK_Controller\u001f", 00:07:24.978 "method": "nvmf_create_subsystem", 00:07:24.978 "req_id": 1 00:07:24.978 } 00:07:24.978 Got JSON-RPC error response 00:07:24.978 response: 00:07:24.978 { 00:07:24.978 "code": -32602, 00:07:24.978 "message": "Invalid MN SPDK_Controller\u001f" 00:07:24.978 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:24.978 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:07:24.978 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:07:24.978 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' 
'91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:24.978 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:24.978 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:24.978 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:24.978 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:24.978 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:07:24.978 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:07:24.978 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:07:24.978 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:24.978 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:24.978 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:07:24.978 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 44 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ x == \- ]] 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'xfY%A<5=%5aX8&'\'',6Xr_F' 00:07:24.979 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'xfY%A<5=%5aX8&'\'',6Xr_F' nqn.2016-06.io.spdk:cnode23220 00:07:25.237 [2024-05-15 10:47:41.432356] nvmf_rpc.c: 408:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23220: invalid serial number 'xfY%A<5=%5aX8&',6Xr_F' 00:07:25.237 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:07:25.237 { 00:07:25.237 "nqn": "nqn.2016-06.io.spdk:cnode23220", 00:07:25.237 "serial_number": "xfY%A<5=%5aX8&'\'',6Xr_F", 00:07:25.237 "method": "nvmf_create_subsystem", 00:07:25.237 "req_id": 1 00:07:25.237 } 00:07:25.237 Got JSON-RPC error response 00:07:25.237 response: 00:07:25.237 { 00:07:25.237 "code": -32602, 
00:07:25.237 "message": "Invalid SN xfY%A<5=%5aX8&'\'',6Xr_F" 00:07:25.237 }' 00:07:25.237 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:07:25.237 { 00:07:25.237 "nqn": "nqn.2016-06.io.spdk:cnode23220", 00:07:25.237 "serial_number": "xfY%A<5=%5aX8&',6Xr_F", 00:07:25.237 "method": "nvmf_create_subsystem", 00:07:25.237 "req_id": 1 00:07:25.237 } 00:07:25.237 Got JSON-RPC error response 00:07:25.237 response: 00:07:25.237 { 00:07:25.237 "code": -32602, 00:07:25.237 "message": "Invalid SN xfY%A<5=%5aX8&',6Xr_F" 00:07:25.237 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:07:25.237 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:07:25.237 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:07:25.237 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:07:25.237 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:07:25.237 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:07:25.237 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:07:25.237 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.237 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:07:25.237 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:07:25.237 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:07:25.237 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.237 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.237 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:07:25.238 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:07:25.238 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:07:25.238 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.238 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.238 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:07:25.238 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:07:25.238 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:07:25.238 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.238 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.238 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:07:25.238 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:07:25.238 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:07:25.238 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.238 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:07:25.497 10:47:41 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:07:25.497 10:47:41 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:07:25.497 10:47:41 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:07:25.497 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.498 10:47:41 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.498 10:47:41 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ ; == \- ]] 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ';uIP%vk[1kqk%0jvrPir]>J?^bF~HFhAer;p$O3Wj' 00:07:25.498 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ';uIP%vk[1kqk%0jvrPir]>J?^bF~HFhAer;p$O3Wj' nqn.2016-06.io.spdk:cnode13578 00:07:25.756 [2024-05-15 10:47:41.809560] nvmf_rpc.c: 417:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13578: invalid model number ';uIP%vk[1kqk%0jvrPir]>J?^bF~HFhAer;p$O3Wj' 00:07:25.756 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:07:25.756 { 00:07:25.756 "nqn": "nqn.2016-06.io.spdk:cnode13578", 00:07:25.756 "model_number": ";uIP%vk[1kqk%0jvrPir]>J?^bF~HFhAer;p$O3Wj", 00:07:25.756 "method": "nvmf_create_subsystem", 00:07:25.756 "req_id": 1 00:07:25.756 } 00:07:25.756 Got JSON-RPC error response 00:07:25.756 response: 00:07:25.756 { 00:07:25.756 "code": -32602, 00:07:25.757 "message": "Invalid MN ;uIP%vk[1kqk%0jvrPir]>J?^bF~HFhAer;p$O3Wj" 00:07:25.757 }' 00:07:25.757 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:07:25.757 { 00:07:25.757 "nqn": "nqn.2016-06.io.spdk:cnode13578", 00:07:25.757 "model_number": ";uIP%vk[1kqk%0jvrPir]>J?^bF~HFhAer;p$O3Wj", 00:07:25.757 "method": "nvmf_create_subsystem", 00:07:25.757 "req_id": 1 00:07:25.757 } 00:07:25.757 Got JSON-RPC error response 00:07:25.757 response: 00:07:25.757 { 00:07:25.757 "code": -32602, 00:07:25.757 "message": "Invalid MN ;uIP%vk[1kqk%0jvrPir]>J?^bF~HFhAer;p$O3Wj" 00:07:25.757 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:07:25.757 10:47:41 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:07:26.014 [2024-05-15 10:47:42.062491] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:26.014 10:47:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:07:26.272 10:47:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:07:26.272 10:47:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:07:26.272 10:47:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:07:26.272 10:47:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:07:26.272 10:47:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:07:26.530 [2024-05-15 10:47:42.552063] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:26.530 [2024-05-15 10:47:42.552164] nvmf_rpc.c: 789:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:07:26.530 10:47:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:07:26.530 { 00:07:26.530 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:26.530 "listen_address": { 00:07:26.530 "trtype": "tcp", 00:07:26.530 "traddr": "", 00:07:26.530 "trsvcid": "4421" 00:07:26.530 }, 00:07:26.530 "method": "nvmf_subsystem_remove_listener", 00:07:26.530 "req_id": 1 00:07:26.530 } 00:07:26.530 Got JSON-RPC error response 00:07:26.530 response: 00:07:26.530 { 00:07:26.530 "code": -32602, 00:07:26.530 "message": "Invalid parameters" 00:07:26.530 }' 00:07:26.530 10:47:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:07:26.530 { 00:07:26.530 "nqn": "nqn.2016-06.io.spdk:cnode", 00:07:26.530 "listen_address": { 00:07:26.530 "trtype": "tcp", 00:07:26.530 "traddr": "", 00:07:26.530 "trsvcid": "4421" 00:07:26.530 }, 00:07:26.530 "method": "nvmf_subsystem_remove_listener", 00:07:26.530 "req_id": 1 00:07:26.530 } 00:07:26.530 Got JSON-RPC error response 00:07:26.530 response: 00:07:26.530 { 00:07:26.530 "code": -32602, 00:07:26.530 "message": "Invalid parameters" 00:07:26.530 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:07:26.530 10:47:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6408 -i 0 00:07:26.787 [2024-05-15 10:47:42.804913] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6408: invalid cntlid range [0-65519] 00:07:26.787 10:47:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:07:26.788 { 00:07:26.788 "nqn": "nqn.2016-06.io.spdk:cnode6408", 00:07:26.788 "min_cntlid": 0, 00:07:26.788 "method": "nvmf_create_subsystem", 00:07:26.788 "req_id": 1 00:07:26.788 } 00:07:26.788 Got JSON-RPC error response 00:07:26.788 response: 00:07:26.788 { 00:07:26.788 "code": -32602, 00:07:26.788 "message": "Invalid cntlid range [0-65519]" 00:07:26.788 }' 00:07:26.788 10:47:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:07:26.788 { 00:07:26.788 "nqn": "nqn.2016-06.io.spdk:cnode6408", 00:07:26.788 "min_cntlid": 0, 00:07:26.788 "method": "nvmf_create_subsystem", 00:07:26.788 "req_id": 1 00:07:26.788 } 
00:07:26.788 Got JSON-RPC error response 00:07:26.788 response: 00:07:26.788 { 00:07:26.788 "code": -32602, 00:07:26.788 "message": "Invalid cntlid range [0-65519]" 00:07:26.788 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:26.788 10:47:42 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26416 -i 65520 00:07:27.053 [2024-05-15 10:47:43.045739] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26416: invalid cntlid range [65520-65519] 00:07:27.053 10:47:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:07:27.053 { 00:07:27.053 "nqn": "nqn.2016-06.io.spdk:cnode26416", 00:07:27.053 "min_cntlid": 65520, 00:07:27.053 "method": "nvmf_create_subsystem", 00:07:27.053 "req_id": 1 00:07:27.053 } 00:07:27.053 Got JSON-RPC error response 00:07:27.053 response: 00:07:27.053 { 00:07:27.053 "code": -32602, 00:07:27.053 "message": "Invalid cntlid range [65520-65519]" 00:07:27.053 }' 00:07:27.053 10:47:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:07:27.053 { 00:07:27.053 "nqn": "nqn.2016-06.io.spdk:cnode26416", 00:07:27.053 "min_cntlid": 65520, 00:07:27.053 "method": "nvmf_create_subsystem", 00:07:27.053 "req_id": 1 00:07:27.053 } 00:07:27.053 Got JSON-RPC error response 00:07:27.053 response: 00:07:27.053 { 00:07:27.053 "code": -32602, 00:07:27.053 "message": "Invalid cntlid range [65520-65519]" 00:07:27.053 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:27.053 10:47:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13784 -I 0 00:07:27.318 [2024-05-15 10:47:43.286613] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13784: invalid cntlid range [1-0] 00:07:27.318 10:47:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:07:27.318 { 00:07:27.318 "nqn": "nqn.2016-06.io.spdk:cnode13784", 00:07:27.318 "max_cntlid": 0, 00:07:27.318 "method": "nvmf_create_subsystem", 00:07:27.318 "req_id": 1 00:07:27.318 } 00:07:27.318 Got JSON-RPC error response 00:07:27.318 response: 00:07:27.318 { 00:07:27.318 "code": -32602, 00:07:27.318 "message": "Invalid cntlid range [1-0]" 00:07:27.318 }' 00:07:27.318 10:47:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:07:27.318 { 00:07:27.318 "nqn": "nqn.2016-06.io.spdk:cnode13784", 00:07:27.318 "max_cntlid": 0, 00:07:27.318 "method": "nvmf_create_subsystem", 00:07:27.318 "req_id": 1 00:07:27.318 } 00:07:27.318 Got JSON-RPC error response 00:07:27.318 response: 00:07:27.318 { 00:07:27.318 "code": -32602, 00:07:27.318 "message": "Invalid cntlid range [1-0]" 00:07:27.318 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:27.318 10:47:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18057 -I 65520 00:07:27.318 [2024-05-15 10:47:43.527391] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18057: invalid cntlid range [1-65520] 00:07:27.318 10:47:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:07:27.318 { 00:07:27.318 "nqn": "nqn.2016-06.io.spdk:cnode18057", 00:07:27.318 "max_cntlid": 65520, 00:07:27.318 "method": "nvmf_create_subsystem", 00:07:27.318 "req_id": 1 00:07:27.318 } 00:07:27.318 Got JSON-RPC 
error response 00:07:27.318 response: 00:07:27.318 { 00:07:27.318 "code": -32602, 00:07:27.318 "message": "Invalid cntlid range [1-65520]" 00:07:27.318 }' 00:07:27.318 10:47:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:07:27.318 { 00:07:27.318 "nqn": "nqn.2016-06.io.spdk:cnode18057", 00:07:27.319 "max_cntlid": 65520, 00:07:27.319 "method": "nvmf_create_subsystem", 00:07:27.319 "req_id": 1 00:07:27.319 } 00:07:27.319 Got JSON-RPC error response 00:07:27.319 response: 00:07:27.319 { 00:07:27.319 "code": -32602, 00:07:27.319 "message": "Invalid cntlid range [1-65520]" 00:07:27.319 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:27.319 10:47:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6528 -i 6 -I 5 00:07:27.576 [2024-05-15 10:47:43.784303] nvmf_rpc.c: 429:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6528: invalid cntlid range [6-5] 00:07:27.576 10:47:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:07:27.576 { 00:07:27.576 "nqn": "nqn.2016-06.io.spdk:cnode6528", 00:07:27.576 "min_cntlid": 6, 00:07:27.576 "max_cntlid": 5, 00:07:27.576 "method": "nvmf_create_subsystem", 00:07:27.576 "req_id": 1 00:07:27.576 } 00:07:27.576 Got JSON-RPC error response 00:07:27.576 response: 00:07:27.576 { 00:07:27.576 "code": -32602, 00:07:27.576 "message": "Invalid cntlid range [6-5]" 00:07:27.576 }' 00:07:27.576 10:47:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:07:27.576 { 00:07:27.576 "nqn": "nqn.2016-06.io.spdk:cnode6528", 00:07:27.576 "min_cntlid": 6, 00:07:27.576 "max_cntlid": 5, 00:07:27.576 "method": "nvmf_create_subsystem", 00:07:27.576 "req_id": 1 00:07:27.576 } 00:07:27.576 Got JSON-RPC error response 00:07:27.576 response: 00:07:27.576 { 00:07:27.576 "code": -32602, 00:07:27.576 "message": "Invalid cntlid range [6-5]" 00:07:27.576 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:07:27.576 10:47:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:07:27.834 10:47:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:07:27.834 { 00:07:27.834 "name": "foobar", 00:07:27.834 "method": "nvmf_delete_target", 00:07:27.834 "req_id": 1 00:07:27.834 } 00:07:27.834 Got JSON-RPC error response 00:07:27.834 response: 00:07:27.834 { 00:07:27.834 "code": -32602, 00:07:27.834 "message": "The specified target doesn'\''t exist, cannot delete it." 00:07:27.834 }' 00:07:27.834 10:47:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:07:27.834 { 00:07:27.834 "name": "foobar", 00:07:27.834 "method": "nvmf_delete_target", 00:07:27.834 "req_id": 1 00:07:27.834 } 00:07:27.834 Got JSON-RPC error response 00:07:27.834 response: 00:07:27.834 { 00:07:27.834 "code": -32602, 00:07:27.834 "message": "The specified target doesn't exist, cannot delete it." 
00:07:27.834 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:07:27.834 10:47:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:07:27.834 10:47:43 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:07:27.834 10:47:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:27.834 10:47:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:07:27.834 10:47:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:27.834 10:47:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:07:27.834 10:47:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:27.834 10:47:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:27.834 rmmod nvme_tcp 00:07:27.834 rmmod nvme_fabrics 00:07:27.834 rmmod nvme_keyring 00:07:27.834 10:47:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:27.834 10:47:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:07:27.834 10:47:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:07:27.834 10:47:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2712893 ']' 00:07:27.834 10:47:43 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2712893 00:07:27.834 10:47:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@946 -- # '[' -z 2712893 ']' 00:07:27.834 10:47:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@950 -- # kill -0 2712893 00:07:27.834 10:47:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # uname 00:07:27.834 10:47:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:27.834 10:47:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2712893 00:07:27.834 10:47:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:07:27.834 10:47:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:07:27.834 10:47:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2712893' 00:07:27.834 killing process with pid 2712893 00:07:27.834 10:47:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@965 -- # kill 2712893 00:07:27.834 [2024-05-15 10:47:43.995111] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:27.834 10:47:43 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@970 -- # wait 2712893 00:07:28.094 10:47:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:28.094 10:47:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:28.094 10:47:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:28.094 10:47:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:28.094 10:47:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:28.094 10:47:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:28.094 10:47:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:28.094 10:47:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.630 10:47:46 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 
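Before the teardown just above, every negative test in the nvmf_invalid run followed the same three-step pattern: invoke scripts/rpc.py with one deliberately bad argument, capture the JSON-RPC error response it prints, and glob-match the expected message ("Invalid SN", "Invalid MN", "Invalid cntlid range", "The specified target doesn't exist..."). A condensed sketch of that pattern, assuming the workspace layout shown in the log; the check_invalid helper and the cnode numbers are illustrative, not part of invalid.sh:

#!/usr/bin/env bash
# Illustrative distillation of the invalid.sh pattern traced above.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

check_invalid() {
    # $1 = substring expected in the JSON-RPC error, rest = rpc.py arguments
    local expect=$1; shift
    local out
    # rpc.py exits non-zero and prints the error response when the target rejects the call
    if out=$("$rpc" "$@" 2>&1); then
        echo "unexpectedly succeeded: $*" >&2; return 1
    fi
    [[ $out == *"$expect"* ]] || { echo "wrong error for: $*" >&2; return 1; }
}

# Control character 0x1f in the serial number -> "Invalid SN ..."
check_invalid 'Invalid SN' nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode1
# min_cntlid outside the allowed 1..65519 range -> "Invalid cntlid range [0-65519]"
check_invalid 'Invalid cntlid range' nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -i 0
# min_cntlid greater than max_cntlid -> "Invalid cntlid range [6-5]"
check_invalid 'Invalid cntlid range' nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -i 6 -I 5

Each assertion relies on the behaviour captured in the out='request: ...' entries above: on rejection the target returns code -32602 and rpc.py surfaces the response text, so a simple substring match is enough to pin down which validation fired.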
00:07:30.630 
00:07:30.630 real 0m9.061s
00:07:30.630 user 0m19.836s
00:07:30.630 sys 0m2.722s
00:07:30.630 10:47:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1122 -- # xtrace_disable
00:07:30.630 10:47:46 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:07:30.630 ************************************
00:07:30.630 END TEST nvmf_invalid
00:07:30.630 ************************************
00:07:30.630 10:47:46 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp
00:07:30.630 10:47:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:07:30.630 10:47:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:07:30.630 10:47:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:07:30.630 ************************************
00:07:30.630 START TEST nvmf_abort
00:07:30.630 ************************************
00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp
00:07:30.630 * Looking for test storage...
00:07:30.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s
00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
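One side effect is visible in the paths/export.sh entries just above: every nested source prepends the same /opt/golangci, /opt/protoc and /opt/go directories again, so the exported PATH ends up carrying several copies of each. A duplicate-safe prepend would keep it flat; the sketch below uses the common pathmunge idiom, which is an assumption here, not a helper export.sh actually defines:

# Sketch: idempotent PATH prepending; skips directories already present.
pathmunge() {
    case ":$PATH:" in
        *":$1:"*) ;;              # already on PATH, do nothing
        *) PATH="$1:$PATH" ;;     # prepend exactly once
    esac
}
pathmunge /opt/golangci/1.54.2/bin
pathmunge /opt/protoc/21.7/bin
pathmunge /opt/go/1.21.1/bin
export PATH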
00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:30.630 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:30.631 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.631 10:47:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:30.631 10:47:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.631 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:30.631 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:30.631 10:47:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:30.631 10:47:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:33.162 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:33.162 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:33.162 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:33.162 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:33.162 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:33.162 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:33.162 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:33.162 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:33.162 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:33.162 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:33.162 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:33.162 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:33.162 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:33.162 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:33.162 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:33.162 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:33.162 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:33.162 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:33.162 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:33.162 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:33.162 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:33.162 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:33.162 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:33.162 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:33.163 10:47:49 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:33.163 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:33.163 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:33.163 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:33.163 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:33.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:33.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms
00:07:33.163
00:07:33.163 --- 10.0.0.2 ping statistics ---
00:07:33.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:33.163 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms
00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:33.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:33.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms
00:07:33.163
00:07:33.163 --- 10.0.0.1 ping statistics ---
00:07:33.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:33.163 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms
00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0
00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@720 -- # xtrace_disable
00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2715821
00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2715821
00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@827 -- # '[' -z 2715821 ']'
00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local max_retries=100
00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:33.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # xtrace_disable
00:07:33.163 10:47:49 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:33.163 [2024-05-15 10:47:49.259877] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization...
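The nvmf_tcp_init sequence traced in the preceding entries isolates the target side of the E810 link in its own network namespace, so the initiator (cvl_0_1, 10.0.0.1) and the target (cvl_0_0, 10.0.0.2) exchange real NVMe/TCP traffic over the wire, and the two pings verify the path in both directions. Condensed from the trace into a runnable sketch (interface names are this rig's; other machines report different cvl_* devices):

    # flush stale addresses, then split the two ports of the link across namespaces
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # 4420 is the default NVMe/TCP port
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

This is also why nvmf_tgt is launched as 'ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt' in the trace: NVMF_TARGET_NS_CMD prefixes every target invocation so the target runs inside the namespace while the test harness drives it from outside.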
00:07:33.163 [2024-05-15 10:47:49.259970] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.163 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.163 [2024-05-15 10:47:49.343561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:33.422 [2024-05-15 10:47:49.468864] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:33.422 [2024-05-15 10:47:49.468963] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:33.422 [2024-05-15 10:47:49.468991] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:33.422 [2024-05-15 10:47:49.469020] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:33.422 [2024-05-15 10:47:49.469039] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:33.422 [2024-05-15 10:47:49.469134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.422 [2024-05-15 10:47:49.469166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.422 [2024-05-15 10:47:49.469173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@860 -- # return 0 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.356 [2024-05-15 10:47:50.252121] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.356 Malloc0 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.356 Delay0 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:34.356 10:47:50 
nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.356 [2024-05-15 10:47:50.328652] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:34.356 [2024-05-15 10:47:50.329044] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:34.356 10:47:50 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:34.356 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.356 [2024-05-15 10:47:50.477153] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:36.886 Initializing NVMe Controllers 00:07:36.886 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:36.886 controller IO queue size 128 less than required 00:07:36.886 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:36.886 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:36.886 Initialization complete. Launching workers. 
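The abort test's data path, as assembled by the rpc_cmd calls above, is a 64 MiB malloc bdev wrapped in a delay bdev that adds roughly one second of latency to every operation, exported as a namespace of nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420. The artificial latency is the point of the test: reads are still queued inside the target when the abort example starts cancelling them. A condensed sketch of the same sequence, issued through rpc.py directly rather than the test's rpc_cmd wrapper (paths relative to the spdk checkout; the delay flags are average and tail read/write latencies in microseconds):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0          # 64 MiB backing store, 4 KiB blocks
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000               # ~1 s added latency per I/O
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128                             # 1 s run at queue depth 128

In the result block that follows, the large 'failed' count on the I/O side next to a nearly equal 'abort submitted ... success' count is the intended outcome: almost every read is cancelled before the delay bdev would have completed it.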
00:07:36.886 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29164 00:07:36.886 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29225, failed to submit 62 00:07:36.886 success 29168, unsuccess 57, failed 0 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:36.886 rmmod nvme_tcp 00:07:36.886 rmmod nvme_fabrics 00:07:36.886 rmmod nvme_keyring 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2715821 ']' 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2715821 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@946 -- # '[' -z 2715821 ']' 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@950 -- # kill -0 2715821 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # uname 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2715821 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2715821' 00:07:36.886 killing process with pid 2715821 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # kill 2715821 00:07:36.886 [2024-05-15 10:47:52.669534] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@970 -- # wait 2715821 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:36.886 
10:47:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:36.886 10:47:52 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:38.824 10:47:55 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:38.824 00:07:38.824 real 0m8.659s 00:07:38.824 user 0m12.563s 00:07:38.824 sys 0m3.287s 00:07:38.824 10:47:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:07:38.824 10:47:55 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:38.824 ************************************ 00:07:38.824 END TEST nvmf_abort 00:07:38.824 ************************************ 00:07:38.824 10:47:55 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:38.824 10:47:55 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:07:38.824 10:47:55 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:07:38.824 10:47:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:39.083 ************************************ 00:07:39.083 START TEST nvmf_ns_hotplug_stress 00:07:39.083 ************************************ 00:07:39.083 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:39.083 * Looking for test storage... 00:07:39.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:39.083 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:39.083 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:39.083 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:39.083 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:39.083 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:39.083 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:39.083 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:39.083 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:39.083 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:39.083 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:39.083 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:39.083 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:39.083 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:39.083 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:39.083 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:39.083 
10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:39.083 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:39.083 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:39.083 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:39.083 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.083 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.083 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.083 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.084 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.084 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.084 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:39.084 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.084 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:39.084 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:39.084 
10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:39.084 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:39.084 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:39.084 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:39.084 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:39.084 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:39.084 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:39.084 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:39.084 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:39.084 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:39.084 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:39.084 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:39.084 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:39.084 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:39.084 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:39.084 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:39.084 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:39.084 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:39.084 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:39.084 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:39.084 10:47:55 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:07:41.618 10:47:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:41.618 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:41.618 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:41.618 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.619 
10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:41.619 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:41.619 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:41.619 
10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:41.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:41.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:07:41.619 00:07:41.619 --- 10.0.0.2 ping statistics --- 00:07:41.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.619 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:41.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:41.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:07:41.619 00:07:41.619 --- 10.0.0.1 ping statistics --- 00:07:41.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.619 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2718587 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2718587 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@827 -- # '[' -z 2718587 ']' 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:07:41.619 10:47:57 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:41.619 [2024-05-15 10:47:57.812190] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:07:41.619 [2024-05-15 10:47:57.812282] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.877 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.877 [2024-05-15 10:47:57.900342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:41.877 [2024-05-15 10:47:58.021559] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:41.877 [2024-05-15 10:47:58.021623] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:41.877 [2024-05-15 10:47:58.021648] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.877 [2024-05-15 10:47:58.021669] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.877 [2024-05-15 10:47:58.021687] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:41.877 [2024-05-15 10:47:58.021785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.878 [2024-05-15 10:47:58.021901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.878 [2024-05-15 10:47:58.021907] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.136 10:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:07:42.136 10:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # return 0 00:07:42.136 10:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:42.136 10:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:42.136 10:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:42.136 10:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:42.136 10:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:42.136 10:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:42.394 [2024-05-15 10:47:58.379137] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:42.394 10:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:42.651 10:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:42.651 [2024-05-15 10:47:58.869614] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:07:42.651 [2024-05-15 10:47:58.869879] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:42.908 10:47:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:42.908 10:47:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:43.166 Malloc0 00:07:43.166 10:47:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:43.424 Delay0 00:07:43.424 10:47:59 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.681 10:47:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:07:43.939 NULL1 00:07:43.939 10:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:44.196 10:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2718921 00:07:44.196 10:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:44.196 10:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921 00:07:44.196 10:48:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:44.196 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.579 Read completed with error (sct=0, sc=11) 00:07:45.579 10:48:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.579 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.984 10:48:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:45.984 10:48:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:45.984 true 00:07:45.984 10:48:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921 00:07:45.984 10:48:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.916 10:48:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.173 10:48:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:47.173 10:48:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:47.173 true 00:07:47.173 10:48:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921 00:07:47.173 10:48:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.430 10:48:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.688 10:48:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:47.688 10:48:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:47.946 true 00:07:47.946 10:48:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921 00:07:47.946 10:48:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:48.878 10:48:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.136 10:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:49.136 10:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:49.393 true 00:07:49.393 10:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921 00:07:49.393 10:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.651 10:48:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.908 10:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:49.908 10:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:50.166 true 00:07:50.166 10:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921 00:07:50.166 10:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.424 10:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.681 10:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:50.681 10:48:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:50.939 true 00:07:50.939 10:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921 00:07:50.939 10:48:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:51.872 10:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.872 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.130 10:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:52.130 10:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:52.402 true 00:07:52.402 10:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921 00:07:52.402 10:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.677 10:48:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.935 10:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:52.935 10:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:53.192 true 00:07:53.192 10:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921 00:07:53.193 10:48:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.127 10:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.385 10:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:54.385 10:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:54.643 true 00:07:54.643 10:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921 00:07:54.643 10:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.901 10:48:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.159 10:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:55.159 10:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:55.159 true 00:07:55.159 10:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921 00:07:55.159 10:48:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.092 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:07:56.092 10:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.092 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:56.350 10:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:56.350 10:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:56.609 true 00:07:56.609 10:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921 00:07:56.609 10:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:56.867 10:48:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.124 10:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:57.124 10:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:57.381 true 00:07:57.381 10:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921 00:07:57.381 10:48:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.314 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:58.314 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:58.573 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:58.573 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:58.573 true 00:07:58.573 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921 00:07:58.573 10:48:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:58.831 10:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:59.089 10:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:59.089 10:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:59.346 true 00:07:59.346 10:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921 00:07:59.346 10:48:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:00.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.538 10:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:00.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:00.796 10:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:00.797 10:48:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:00.797 true 00:08:01.054 10:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921 00:08:01.054 10:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:01.054 10:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:01.312 10:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:01.312 10:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:01.569 true 00:08:01.569 10:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921 00:08:01.569 10:48:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:02.503 10:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:02.761 10:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:02.761 10:48:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:03.019 true 00:08:03.019 10:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921 00:08:03.019 10:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:03.276 10:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:03.534 10:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:03.534 10:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:03.792 true 00:08:03.792 10:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921 00:08:03.792 10:48:19 nvmf_tcp.nvmf_ns_hotplug_stress -- 
00:08:04.050 10:48:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:04.307 10:48:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:08:04.307 10:48:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:08:04.565 true
00:08:04.565 10:48:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921
00:08:04.565 10:48:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:05.499 10:48:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:06.099 10:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:08:06.099 10:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:08:06.099 true
00:08:06.099 10:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921
00:08:06.099 10:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:06.356 10:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:06.614 10:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:08:06.614 10:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:08:06.873 true
00:08:06.873 10:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921
00:08:06.873 10:48:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:07.129 10:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:07.386 10:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:08:07.386 10:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:08:07.645 true
00:08:07.645 10:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921
00:08:07.645 10:48:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
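The cycle traced above is the single-namespace phase of test/nvmf/target/ns_hotplug_stress.sh: while the background I/O generator (PID 2718921) is still alive per kill -0, the script hot-removes namespace 1 from nqn.2016-06.io.spdk:cnode1, re-adds the Delay0 bdev as that namespace, and grows the NULL1 bdev one step at a time through bdev_null_resize. The bare "true" lines are the RPC replies, and the "Message suppressed 999 times" lines are rate-limited read failures from the I/O generator while the namespace is briefly detached. A minimal sketch of the loop, reconstructed from the @44-@50 xtrace markers (not copied from the script; variable names are illustrative, and rpc.py stands in for the full scripts/rpc.py path):

    # Reconstruction of ns_hotplug_stress.sh lines 44-50 as suggested by the xtrace.
    # Assumes $perf_pid holds the PID of the backgrounded I/O generator.
    null_size=1010
    while kill -0 "$perf_pid"; do                                        # @44: run until the generator exits
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # @45: hot-remove nsid 1
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # @46: hot-add Delay0 back as nsid 1
        null_size=$((null_size + 1))                                     # @49: 1011, 1012, ...
        rpc.py bdev_null_resize NULL1 "$null_size"                       # @50: RPC prints "true" on success
    done

Once the generator finishes, the kill -0 probe prints "No such process" and the script falls through to a wait on the same PID, which is exactly what appears after the null_size=1029 pass below.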
00:08:09.019 10:48:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:09.019 10:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:08:09.019 10:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:08:09.277 true
00:08:09.277 10:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921
00:08:09.277 10:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:09.535 10:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:09.792 10:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:08:09.793 10:48:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:08:10.050 true
00:08:10.050 10:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921
00:08:10.050 10:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:10.307 10:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:10.565 10:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:08:10.565 10:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:08:10.822 true
00:08:10.822 10:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921
00:08:10.822 10:48:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:11.759 10:48:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:11.759 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:12.017 10:48:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:08:12.017 10:48:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:08:12.275 true
00:08:12.275 10:48:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921
00:08:12.275 10:48:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:12.533 10:48:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:12.791 10:48:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:08:12.791 10:48:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:08:13.050 true
00:08:13.050 10:48:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921
00:08:13.050 10:48:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:13.984 10:48:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:13.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:13.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:13.984 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:14.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:08:14.241 10:48:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:08:14.241 10:48:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:08:14.500 true
00:08:14.500 10:48:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921
00:08:14.500 10:48:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:14.500 Initializing NVMe Controllers
00:08:14.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:14.500 Controller IO queue size 128, less than required.
00:08:14.500 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:14.500 Controller IO queue size 128, less than required.
00:08:14.500 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:14.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:14.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:14.500 Initialization complete. Launching workers.
00:08:14.500 ========================================================
00:08:14.500 Latency(us)
00:08:14.500 Device Information : IOPS MiB/s Average min max
00:08:14.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 692.50 0.34 89217.26 2215.80 1094844.94
00:08:14.500 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10175.81 4.97 12578.57 2742.27 369423.18
00:08:14.500 ========================================================
00:08:14.500 Total : 10868.32 5.31 17461.80 2215.80 1094844.94
00:08:14.500
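This block is the end-of-run summary from the background I/O process that the loop above was polling. As a sanity check, the Total row is the IOPS-weighted combination of the two per-namespace rows:

    \text{Total IOPS} = 692.50 + 10175.81 = 10868.31 \approx 10868.32
    \overline{L}_{\text{avg}} = \frac{692.50 \times 89217.26 + 10175.81 \times 12578.57}{10868.32} \approx 17461.8\ \mu\text{s}

NSID 1, the Delay0 namespace that was being removed and re-added the whole time, ends up with roughly 15x fewer IOPS and a 7x higher average latency than NSID 2, which is the expected signature of I/O stalling against a hotplugged namespace.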
00:08:14.758 10:48:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:08:15.015 10:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:08:15.016 10:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:08:15.273 true
00:08:15.273 10:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2718921
00:08:15.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2718921) - No such process
00:08:15.273 10:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2718921
00:08:15.273 10:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:15.531 10:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:15.790 10:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:08:15.790 10:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:08:15.790 10:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:08:15.790 10:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:15.790 10:48:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:08:16.047 null0
00:08:16.047 10:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:16.047 10:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:16.047 10:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:08:16.305 null1
00:08:16.305 10:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:16.305 10:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:16.305 10:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:08:16.562 null2
00:08:16.562 10:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:16.562 10:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:16.562 10:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:08:16.820 null3
00:08:16.820 10:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:16.820 10:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:16.820 10:48:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:08:17.077 null4
00:08:17.077 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:17.077 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:17.077 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:08:17.334 null5
00:08:17.334 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:17.334 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:17.334 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:08:17.334 null6
00:08:17.334 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:17.334 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:17.334 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:08:17.592 null7
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
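With the single-namespace phase done (the "No such process" probe and wait above), the script removes both namespaces and provisions the multi-worker phase: eight null bdevs, null0 through null7, each created by bdev_null_create with the positional arguments 100 (size in MB) and 4096 (block size in bytes), the RPC echoing back each new bdev's name. A minimal sketch of this setup step as implied by the @58-@60 markers (not copied from the script; rpc.py again abbreviates the full path):

    # Reconstruction of ns_hotplug_stress.sh lines 58-61 from the xtrace.
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create "null$i" 100 4096   # @60: prints the bdev name ("null0", "null1", ...)
    done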
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2723560 2723561 2723563 2723565 2723567 2723569 2723571 2723573
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:17.592 10:48:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:17.849 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:17.849 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:18.108 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:18.108 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:18.108 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:18.108 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:18.108 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:18.108 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:18.366 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.366 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.366 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:18.366 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.366 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.366 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
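With all eight workers launched and their PIDs (2723560 through 2723573) collected, the script parks in the wait at line 66 while the workers run: worker i handles namespace ID i+1 backed by bdev null<i>, adding and removing that namespace ten times in a tight loop, so the @17 add and @18 remove entries for different namespace IDs interleave freely from here on. A minimal sketch of the worker function and the spawn loop as implied by the @14-@18 and @62-@66 markers (not copied from the script; the & backgrounding is inferred from the collected PIDs):

    add_remove() {
        local nsid=$1 bdev=$2                                                           # @14
        for ((i = 0; i < 10; i++)); do                                                  # @16
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
        done
    }

    for ((i = 0; i < nthreads; i++)); do    # @62
        add_remove $((i + 1)) "null$i" &    # @63: add_remove 1 null0 through add_remove 8 null7
        pids+=($!)                          # @64
    done
    wait "${pids[@]}"                       # @66: the eight PIDs recorded above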
00:08:18.367 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.367 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.367 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:18.367 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.367 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.367 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:18.367 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.367 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.367 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:18.367 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.367 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.367 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:18.367 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.367 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.367 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:18.367 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.367 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.367 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:18.624 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:18.624 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:18.624 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:18.624 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:18.625 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:18.625 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:18.625 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:18.625 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:18.882 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.882 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.882 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:18.882 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.882 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.882 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:18.882 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.882 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.882 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:18.882 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.882 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.882 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:18.882 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.882 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.882 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:18.882 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.882 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.882 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:18.882 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.882 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.882 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:18.882 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:18.882 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:18.882 10:48:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:19.163 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:19.163 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:19.163 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:19.163 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:19.163 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:19.163 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:19.163 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:19.163 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:19.421 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.421 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.421 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:19.421 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.421 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.421 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:19.421 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.421 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.421 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:19.421 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.421 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.421 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:19.421 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.421 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.421 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:19.421 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.421 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.421 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:19.421 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.421 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.421 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:19.421 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.421 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.421 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:19.679 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:19.679 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:19.679 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:19.679 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:19.679 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:19.679 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:19.679 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:19.679 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:19.936 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.936 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.936 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:19.936 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.936 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.936 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:19.936 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.936 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.936 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:19.936 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.936 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.936 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:19.936 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.936 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.936 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.936 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.936 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:19.936 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:19.936 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.936 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.936 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:19.936 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:19.936 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:19.936 10:48:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:20.194 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:20.194 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:20.194 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:20.194 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:20.194 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:20.194 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:20.194 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:20.194 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:20.451 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.451 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.451 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:20.451 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.451 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.451 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:20.451 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.451 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.451 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:20.451 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.451 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.451 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:20.451 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.451 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.451 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:20.451 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.451 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.451 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:20.451 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.451 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.451 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:20.451 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.451 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.451 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:20.709 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:20.709 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:20.709 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:20.709 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:20.709 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:20.709 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:20.709 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:20.709 10:48:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:20.966 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.966 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.966 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:20.966 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.966 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.966 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:20.966 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.966 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.966 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:20.966 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.966 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.966 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:20.966 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.966 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.966 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:20.966 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.966 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.966 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.966 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.966 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:20.966 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:20.966 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:20.966 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:20.966 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:21.223 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:21.223 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:21.223 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:21.223 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:21.223 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:21.223 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:21.223 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:21.223 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:21.480 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.480 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.480 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:21.480 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.480 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.480 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:21.480 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.480 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.480 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:21.480 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.480 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.480 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:21.480 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:21.480 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:21.480 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:21.480 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.480 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.480 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:21.480 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.480 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.480 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:21.480 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.480 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.480 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:21.737 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.737 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:21.737 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:21.737 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:21.737 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:21.737 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:21.737 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:21.737 10:48:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:21.994 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.994 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.994 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:21.994 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:08:21.994 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.994 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:21.994 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.994 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.994 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:21.994 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.994 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.994 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:21.994 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.994 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.994 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:21.994 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.994 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.994 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:21.994 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.994 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.994 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:21.994 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:21.994 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:21.994 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:22.251 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:22.251 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.251 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:22.251 
10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:22.251 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:22.251 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:22.251 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:22.251 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:22.509 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.509 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.509 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:22.509 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.509 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.509 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:22.509 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.509 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.509 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:22.509 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.509 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.509 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.509 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:22.509 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.509 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:22.509 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.509 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.509 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:22.509 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.509 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.509 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:22.509 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:22.509 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:22.509 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:22.767 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.767 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:22.767 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:22.767 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:22.767 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:22.767 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:22.767 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:22.767 10:48:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:23.025 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.025 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.025 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.025 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.025 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.025 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
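[Editor's note] The interleaved @16-@18 traces above are ns_hotplug_stress.sh's eight parallel workers, one per null bdev, each looping ten times: attach the bdev as a namespace of nqn.2016-06.io.spdk:cnode1, then detach it. The adds and removes land in shuffled batches of eight because the workers stay roughly in lockstep, and the trailing runs of (( ++i )) / (( i < 10 )) with no rpc.py call between them are the final loop checks as each worker hits i == 10. A minimal sketch of the pattern the trace implies, assuming a per-namespace worker function; the rpc.py invocations are exactly as logged, the add_remove name and structure are not visible in the log:

# Sketch reconstructed from the xtrace above; not the verbatim script.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

add_remove() {                     # assumed worker shape (trace shows @16-@18 only)
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; ++i)); do                                  # sh@16
        "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"   # sh@17
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" "$nsid"           # sh@18
    done
}

for n in $(seq 1 8); do
    add_remove "$n" "null$((n - 1))" &   # nsid 1 -> null0 ... nsid 8 -> null7
done
wait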
00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:23.284 rmmod nvme_tcp 00:08:23.284 rmmod nvme_fabrics 00:08:23.284 rmmod nvme_keyring 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2718587 ']' 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2718587 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@946 -- # '[' -z 2718587 ']' 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # kill -0 2718587 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # uname 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2718587 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:08:23.284 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:08:23.285 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2718587' 00:08:23.285 killing process with pid 2718587 00:08:23.285 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # kill 2718587 00:08:23.285 [2024-05-15 10:48:39.367489] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:23.285 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # wait 2718587 00:08:23.543 10:48:39 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:23.543 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:23.543 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:23.543 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:23.543 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:23.543 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.543 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:23.543 10:48:39 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.074 10:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:26.074 00:08:26.074 real 0m46.633s 00:08:26.074 user 3m29.932s 00:08:26.074 sys 0m16.322s 00:08:26.074 10:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:26.074 10:48:41 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:26.074 ************************************ 00:08:26.074 END TEST nvmf_ns_hotplug_stress 00:08:26.074 ************************************ 00:08:26.074 10:48:41 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:26.074 10:48:41 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:26.074 10:48:41 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:26.074 10:48:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:26.074 ************************************ 00:08:26.074 START TEST nvmf_connect_stress 00:08:26.074 ************************************ 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:08:26.074 * Looking for test storage... 
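[Editor's note] The teardown just before this banner is nvmftestfini: clear the trap, sync, modprobe -v -r the nvme-tcp/nvme-fabrics/nvme-keyring modules, then killprocess the target pid 2718587. A sketch of the checks visible in the autotest_common.sh@946-@970 trace, reconstructed from the trace rather than the verbatim helper (the sudo branch is assumed; only the reactor_1 path is exercised here):

killprocess() {
    local pid=$1 process_name=""
    [[ -n $pid ]] || return 1                 # @946: refuse an empty pid
    kill -0 "$pid" || return 0                # @950: nothing to do if gone
    if [[ $(uname) == Linux ]]; then          # @951
        process_name=$(ps --no-headers -o comm= "$pid")   # @952: reactor_1
    fi
    if [[ $process_name == sudo ]]; then      # @956: assumed escalation branch
        sudo kill "$pid"
    else
        echo "killing process with pid $pid"  # @964
        kill "$pid"                           # @965
    fi
    wait "$pid"                               # @970: reap the target process
}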
00:08:26.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:08:26.074 10:48:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:28.603 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:28.603 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:28.603 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:28.604 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:28.604 10:48:44 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:28.604 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:28.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:28.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:08:28.604 00:08:28.604 --- 10.0.0.2 ping statistics --- 00:08:28.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.604 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:28.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:28.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:08:28.604 00:08:28.604 --- 10.0.0.1 ping statistics --- 00:08:28.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:28.604 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2726740 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2726740 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@827 -- # '[' -z 2726740 ']' 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:28.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:28.604 10:48:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:28.604 [2024-05-15 10:48:44.561828] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
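[Editor's note] Bring-up for this suite is all in the nvmf_tcp_init trace above: the second E810 port stays in the root namespace as the initiator (10.0.0.1 on cvl_0_1), the first is moved into the cvl_0_0_ns_spdk namespace as the target (10.0.0.2 on cvl_0_0), an iptables rule admits the 4420 listener, and a ping in each direction proves the path. Condensed from the logged commands (common.sh@229-@268); interface names and addresses are exactly as logged:

ip netns add cvl_0_0_ns_spdk                       # private ns for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port moves in
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

nvmf_tgt itself is then launched under ip netns exec cvl_0_0_ns_spdk (common.sh@270 prepends NVMF_TARGET_NS_CMD to NVMF_APP), which is why the app start below and every later target-side command in the trace carries that prefix.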
00:08:28.604 [2024-05-15 10:48:44.561914] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:28.604 EAL: No free 2048 kB hugepages reported on node 1 00:08:28.604 [2024-05-15 10:48:44.642936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:28.604 [2024-05-15 10:48:44.758978] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:28.604 [2024-05-15 10:48:44.759050] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:28.604 [2024-05-15 10:48:44.759070] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:28.604 [2024-05-15 10:48:44.759086] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:28.604 [2024-05-15 10:48:44.759100] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:28.604 [2024-05-15 10:48:44.759158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.604 [2024-05-15 10:48:44.759229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:28.604 [2024-05-15 10:48:44.759235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@860 -- # return 0 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.538 [2024-05-15 10:48:45.540716] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.538 [2024-05-15 10:48:45.557787] nvmf_rpc.c: 610:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:29.538 [2024-05-15 10:48:45.571082] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.538 NULL1 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2726894 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:29.538 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:29.539 
10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:29.539 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726894 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.539 10:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:29.796 10:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:29.796 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726894 00:08:29.797 10:48:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:29.797 10:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:29.797 10:48:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.055 10:48:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.055 10:48:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726894 00:08:30.055 10:48:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:30.055 10:48:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.055 10:48:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:30.621 10:48:46 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.621 10:48:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726894 00:08:30.621 10:48:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:30.621 10:48:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.621 10:48:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
[The five-line liveness check above -- kill -0 2726894 to test whether the stress process still exists, then an rpc_cmd round trip bracketed by xtrace_disable/set +x -- repeats unchanged every 250-550 ms; the identical iterations timestamped 00:08:30.879 through 00:08:39.163 are elided.]
00:08:39.421 10:48:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.421 10:48:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726894 00:08:39.421 10:48:55
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:08:39.421 10:48:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:39.421 10:48:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:39.680 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:39.680 10:48:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:39.680 10:48:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2726894 00:08:39.680 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2726894) - No such process 00:08:39.680 10:48:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2726894 00:08:39.680 10:48:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:08:39.938 10:48:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:39.938 10:48:55 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:08:39.938 10:48:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:39.938 10:48:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:08:39.938 10:48:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:39.938 10:48:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:08:39.938 10:48:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:39.938 10:48:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:39.938 rmmod nvme_tcp 00:08:39.938 rmmod nvme_fabrics 00:08:39.938 rmmod nvme_keyring 00:08:39.938 10:48:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:39.938 10:48:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:08:39.938 10:48:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:08:39.938 10:48:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2726740 ']' 00:08:39.938 10:48:55 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2726740 00:08:39.938 10:48:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@946 -- # '[' -z 2726740 ']' 00:08:39.938 10:48:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@950 -- # kill -0 2726740 00:08:39.938 10:48:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # uname 00:08:39.939 10:48:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:39.939 10:48:55 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2726740 00:08:39.939 10:48:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:08:39.939 10:48:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:08:39.939 10:48:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2726740' 00:08:39.939 killing process with pid 2726740 00:08:39.939 10:48:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # kill 2726740 00:08:39.939 [2024-05-15 10:48:56.008479] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled 
for removal in v24.09 hit 1 times 00:08:39.939 10:48:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@970 -- # wait 2726740 00:08:40.197 10:48:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:40.197 10:48:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:40.197 10:48:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:40.197 10:48:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:40.197 10:48:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:40.197 10:48:56 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.197 10:48:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.197 10:48:56 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.102 10:48:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:42.361 00:08:42.361 real 0m16.575s 00:08:42.361 user 0m40.616s 00:08:42.361 sys 0m6.459s 00:08:42.361 10:48:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:42.361 10:48:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:08:42.361 ************************************ 00:08:42.361 END TEST nvmf_connect_stress 00:08:42.361 ************************************ 00:08:42.361 10:48:58 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:42.361 10:48:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:42.361 10:48:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:42.361 10:48:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:42.361 ************************************ 00:08:42.361 START TEST nvmf_fused_ordering 00:08:42.361 ************************************ 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:08:42.361 * Looking for test storage... 
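Two harness idioms recur in the epilogue above: connect_stress.sh@34-@35 probe the stress process with kill -0 2726894 and push an rpc_cmd each round until the probe fails (the "No such process" line), and run_test wraps every test script in the timed START/END banners seen at the nvmf_connect_stress/nvmf_fused_ordering boundary. A minimal bash sketch of both patterns; the function bodies are reconstructions from the trace, not the framework's actual code:

# Reconstruction from the xtrace output above; not copied from the SPDK repo.

# connect_stress.sh@34-@35 pattern: while the stress process is alive, keep
# driving RPCs at the target. kill -0 delivers no signal; it only tests that
# the PID exists. Once the process exits, the probe prints the
# "kill: (PID) - No such process" error seen above and the loop ends.
poll_stress_process() {
    local stress_pid=$1
    while kill -0 "$stress_pid"; do
        rpc_cmd    # framework helper; assumed to replay RPCs from rpc.txt
    done
    wait "$stress_pid"                 # reap the child (the 'wait 2726894' line)
    rm -f "$testdir/rpc.txt"           # $testdir per the trace: .../spdk/test/nvmf/target
}

# run_test pattern: banner, time the script, banner. The real/user/sys
# summary above is ordinary bash `time` output.
run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}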
00:08:42.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=... [the paths/export.sh@3-@6 steps re-prepend the same golangci 1.54.2, protoc 21.7, and Go 1.21.1 directories already shown in the @2 dump above, export PATH, and echo the final value; the three remaining full PATH dumps, each a long run of repeated /opt/... entries, are elided] 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:42.361 10:48:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14>
/dev/null' 00:08:42.362 10:48:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:42.362 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:42.362 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:42.362 10:48:58 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:08:42.362 10:48:58 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:44.892 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:44.892 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:44.892 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:44.893 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:44.893 10:49:00 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:44.893 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:44.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:44.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:08:44.893 00:08:44.893 --- 10.0.0.2 ping statistics --- 00:08:44.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.893 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:44.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:44.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:08:44.893 00:08:44.893 --- 10.0.0.1 ping statistics --- 00:08:44.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.893 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2730339 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2730339 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@827 -- # '[' -z 2730339 ']' 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:44.893 10:49:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:44.893 [2024-05-15 10:49:01.028731] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
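The nvmf_tcp_init trace above (nvmf/common.sh@229-@268) is the interesting part of nvmftestinit: it moves the target-side e810 port into a private network namespace so initiator and target traffic really crosses the link, then verifies reachability with the two pings. A condensed reconstruction of exactly the commands traced (interface names and addresses taken from the log):

# Reconstruction of the nvmf/common.sh@244-@268 commands traced above.
TARGET_IF=cvl_0_0        # NVMF_TARGET_INTERFACE, moved into the namespace
INIT_IF=cvl_0_1          # NVMF_INITIATOR_INTERFACE, stays in the root ns
NS=cvl_0_0_ns_spdk       # NVMF_TARGET_NAMESPACE

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INIT_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                          # isolate the target port
ip addr add 10.0.0.1/24 dev "$INIT_IF"                        # NVMF_INITIATOR_IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # NVMF_FIRST_TARGET_IP
ip link set "$INIT_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                        # root ns -> target (first ping above)
ip netns exec "$NS" ping -c 1 10.0.0.1    # namespace -> initiator (second ping)

Every later target-side command in the log runs through ip netns exec cvl_0_0_ns_spdk (NVMF_TARGET_NS_CMD), including the nvmf_tgt launch that follows.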
00:08:44.893 [2024-05-15 10:49:01.028814] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.893 EAL: No free 2048 kB hugepages reported on node 1 00:08:44.893 [2024-05-15 10:49:01.110355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.152 [2024-05-15 10:49:01.233675] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.152 [2024-05-15 10:49:01.233734] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.152 [2024-05-15 10:49:01.233759] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.152 [2024-05-15 10:49:01.233779] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.152 [2024-05-15 10:49:01.233797] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.152 [2024-05-15 10:49:01.233838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.086 10:49:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:46.086 10:49:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # return 0 00:08:46.086 10:49:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:46.086 10:49:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:46.086 10:49:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:46.086 10:49:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.086 10:49:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:46.086 10:49:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.086 10:49:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:46.086 [2024-05-15 10:49:01.994972] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.086 10:49:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.086 10:49:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:46.086 10:49:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.086 10:49:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:46.086 10:49:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.086 10:49:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.086 10:49:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.086 10:49:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:46.086 [2024-05-15 10:49:02.010916] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:46.086 [2024-05-15 10:49:02.011223] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.086 10:49:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.086 10:49:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:46.086 10:49:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.086 10:49:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:46.086 NULL1 00:08:46.086 10:49:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.086 10:49:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:08:46.086 10:49:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.086 10:49:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:46.086 10:49:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.086 10:49:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:46.086 10:49:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.086 10:49:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:46.086 10:49:02 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.086 10:49:02 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:46.087 [2024-05-15 10:49:02.056478] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
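The rpc_cmd sequence at fused_ordering.sh@15-@20 above provisions the target end to end: a TCP transport, subsystem cnode1, a listener on 10.0.0.2:4420, and a 1000 MB null bdev exposed as namespace 1 (the "size: 1GB" the helper reports below). A sketch of the same setup as direct scripts/rpc.py calls against the /var/tmp/spdk.sock socket that waitforlisten polled; rpc_cmd drives these same method names over that socket:

# Sketch: the fused_ordering.sh@15-@20 provisioning as explicit rpc.py calls.
# Method names and arguments are exactly those traced above.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_null_create NULL1 1000 512      # 1000 MB null bdev, 512-byte blocks
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1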
00:08:46.087 [2024-05-15 10:49:02.056521] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2730493 ] 00:08:46.087 EAL: No free 2048 kB hugepages reported on node 1 00:08:47.022 Attached to nqn.2016-06.io.spdk:cnode1 00:08:47.022 Namespace ID: 1 size: 1GB 00:08:47.022 fused_ordering(0)
[fused_ordering(1) through fused_ordering(741) follow, one counter per line, strictly increasing, with log timestamps advancing from 00:08:47.022 to 00:08:49.479 and occasional sub-second pauses around counters 205, 410, and 615; the excerpt ends mid-stream at fused_ordering(741), and the intervening counter lines are elided.]
fused_ordering(742) 00:08:49.479 fused_ordering(743) 00:08:49.479 fused_ordering(744) 00:08:49.479 fused_ordering(745) 00:08:49.479 fused_ordering(746) 00:08:49.479 fused_ordering(747) 00:08:49.479 fused_ordering(748) 00:08:49.479 fused_ordering(749) 00:08:49.479 fused_ordering(750) 00:08:49.479 fused_ordering(751) 00:08:49.479 fused_ordering(752) 00:08:49.479 fused_ordering(753) 00:08:49.479 fused_ordering(754) 00:08:49.479 fused_ordering(755) 00:08:49.479 fused_ordering(756) 00:08:49.479 fused_ordering(757) 00:08:49.479 fused_ordering(758) 00:08:49.479 fused_ordering(759) 00:08:49.479 fused_ordering(760) 00:08:49.479 fused_ordering(761) 00:08:49.479 fused_ordering(762) 00:08:49.479 fused_ordering(763) 00:08:49.479 fused_ordering(764) 00:08:49.479 fused_ordering(765) 00:08:49.479 fused_ordering(766) 00:08:49.479 fused_ordering(767) 00:08:49.479 fused_ordering(768) 00:08:49.479 fused_ordering(769) 00:08:49.479 fused_ordering(770) 00:08:49.479 fused_ordering(771) 00:08:49.479 fused_ordering(772) 00:08:49.479 fused_ordering(773) 00:08:49.479 fused_ordering(774) 00:08:49.479 fused_ordering(775) 00:08:49.479 fused_ordering(776) 00:08:49.479 fused_ordering(777) 00:08:49.479 fused_ordering(778) 00:08:49.479 fused_ordering(779) 00:08:49.479 fused_ordering(780) 00:08:49.479 fused_ordering(781) 00:08:49.479 fused_ordering(782) 00:08:49.479 fused_ordering(783) 00:08:49.479 fused_ordering(784) 00:08:49.479 fused_ordering(785) 00:08:49.479 fused_ordering(786) 00:08:49.479 fused_ordering(787) 00:08:49.479 fused_ordering(788) 00:08:49.479 fused_ordering(789) 00:08:49.479 fused_ordering(790) 00:08:49.479 fused_ordering(791) 00:08:49.479 fused_ordering(792) 00:08:49.479 fused_ordering(793) 00:08:49.479 fused_ordering(794) 00:08:49.479 fused_ordering(795) 00:08:49.479 fused_ordering(796) 00:08:49.479 fused_ordering(797) 00:08:49.479 fused_ordering(798) 00:08:49.479 fused_ordering(799) 00:08:49.479 fused_ordering(800) 00:08:49.479 fused_ordering(801) 00:08:49.479 fused_ordering(802) 00:08:49.479 fused_ordering(803) 00:08:49.479 fused_ordering(804) 00:08:49.479 fused_ordering(805) 00:08:49.479 fused_ordering(806) 00:08:49.479 fused_ordering(807) 00:08:49.479 fused_ordering(808) 00:08:49.479 fused_ordering(809) 00:08:49.479 fused_ordering(810) 00:08:49.479 fused_ordering(811) 00:08:49.479 fused_ordering(812) 00:08:49.479 fused_ordering(813) 00:08:49.479 fused_ordering(814) 00:08:49.479 fused_ordering(815) 00:08:49.479 fused_ordering(816) 00:08:49.479 fused_ordering(817) 00:08:49.479 fused_ordering(818) 00:08:49.479 fused_ordering(819) 00:08:49.479 fused_ordering(820) 00:08:50.414 fused_ordering(821) 00:08:50.414 fused_ordering(822) 00:08:50.414 fused_ordering(823) 00:08:50.414 fused_ordering(824) 00:08:50.414 fused_ordering(825) 00:08:50.414 fused_ordering(826) 00:08:50.414 fused_ordering(827) 00:08:50.414 fused_ordering(828) 00:08:50.414 fused_ordering(829) 00:08:50.414 fused_ordering(830) 00:08:50.414 fused_ordering(831) 00:08:50.414 fused_ordering(832) 00:08:50.414 fused_ordering(833) 00:08:50.414 fused_ordering(834) 00:08:50.414 fused_ordering(835) 00:08:50.414 fused_ordering(836) 00:08:50.414 fused_ordering(837) 00:08:50.414 fused_ordering(838) 00:08:50.414 fused_ordering(839) 00:08:50.414 fused_ordering(840) 00:08:50.414 fused_ordering(841) 00:08:50.414 fused_ordering(842) 00:08:50.414 fused_ordering(843) 00:08:50.414 fused_ordering(844) 00:08:50.414 fused_ordering(845) 00:08:50.414 fused_ordering(846) 00:08:50.414 fused_ordering(847) 00:08:50.414 fused_ordering(848) 00:08:50.414 fused_ordering(849) 
00:08:50.414 fused_ordering(850) 00:08:50.414 fused_ordering(851) 00:08:50.414 fused_ordering(852) 00:08:50.414 fused_ordering(853) 00:08:50.414 fused_ordering(854) 00:08:50.414 fused_ordering(855) 00:08:50.414 fused_ordering(856) 00:08:50.414 fused_ordering(857) 00:08:50.414 fused_ordering(858) 00:08:50.414 fused_ordering(859) 00:08:50.414 fused_ordering(860) 00:08:50.414 fused_ordering(861) 00:08:50.414 fused_ordering(862) 00:08:50.414 fused_ordering(863) 00:08:50.414 fused_ordering(864) 00:08:50.414 fused_ordering(865) 00:08:50.414 fused_ordering(866) 00:08:50.414 fused_ordering(867) 00:08:50.414 fused_ordering(868) 00:08:50.414 fused_ordering(869) 00:08:50.414 fused_ordering(870) 00:08:50.414 fused_ordering(871) 00:08:50.414 fused_ordering(872) 00:08:50.414 fused_ordering(873) 00:08:50.414 fused_ordering(874) 00:08:50.414 fused_ordering(875) 00:08:50.414 fused_ordering(876) 00:08:50.414 fused_ordering(877) 00:08:50.414 fused_ordering(878) 00:08:50.414 fused_ordering(879) 00:08:50.414 fused_ordering(880) 00:08:50.414 fused_ordering(881) 00:08:50.414 fused_ordering(882) 00:08:50.415 fused_ordering(883) 00:08:50.415 fused_ordering(884) 00:08:50.415 fused_ordering(885) 00:08:50.415 fused_ordering(886) 00:08:50.415 fused_ordering(887) 00:08:50.415 fused_ordering(888) 00:08:50.415 fused_ordering(889) 00:08:50.415 fused_ordering(890) 00:08:50.415 fused_ordering(891) 00:08:50.415 fused_ordering(892) 00:08:50.415 fused_ordering(893) 00:08:50.415 fused_ordering(894) 00:08:50.415 fused_ordering(895) 00:08:50.415 fused_ordering(896) 00:08:50.415 fused_ordering(897) 00:08:50.415 fused_ordering(898) 00:08:50.415 fused_ordering(899) 00:08:50.415 fused_ordering(900) 00:08:50.415 fused_ordering(901) 00:08:50.415 fused_ordering(902) 00:08:50.415 fused_ordering(903) 00:08:50.415 fused_ordering(904) 00:08:50.415 fused_ordering(905) 00:08:50.415 fused_ordering(906) 00:08:50.415 fused_ordering(907) 00:08:50.415 fused_ordering(908) 00:08:50.415 fused_ordering(909) 00:08:50.415 fused_ordering(910) 00:08:50.415 fused_ordering(911) 00:08:50.415 fused_ordering(912) 00:08:50.415 fused_ordering(913) 00:08:50.415 fused_ordering(914) 00:08:50.415 fused_ordering(915) 00:08:50.415 fused_ordering(916) 00:08:50.415 fused_ordering(917) 00:08:50.415 fused_ordering(918) 00:08:50.415 fused_ordering(919) 00:08:50.415 fused_ordering(920) 00:08:50.415 fused_ordering(921) 00:08:50.415 fused_ordering(922) 00:08:50.415 fused_ordering(923) 00:08:50.415 fused_ordering(924) 00:08:50.415 fused_ordering(925) 00:08:50.415 fused_ordering(926) 00:08:50.415 fused_ordering(927) 00:08:50.415 fused_ordering(928) 00:08:50.415 fused_ordering(929) 00:08:50.415 fused_ordering(930) 00:08:50.415 fused_ordering(931) 00:08:50.415 fused_ordering(932) 00:08:50.415 fused_ordering(933) 00:08:50.415 fused_ordering(934) 00:08:50.415 fused_ordering(935) 00:08:50.415 fused_ordering(936) 00:08:50.415 fused_ordering(937) 00:08:50.415 fused_ordering(938) 00:08:50.415 fused_ordering(939) 00:08:50.415 fused_ordering(940) 00:08:50.415 fused_ordering(941) 00:08:50.415 fused_ordering(942) 00:08:50.415 fused_ordering(943) 00:08:50.415 fused_ordering(944) 00:08:50.415 fused_ordering(945) 00:08:50.415 fused_ordering(946) 00:08:50.415 fused_ordering(947) 00:08:50.415 fused_ordering(948) 00:08:50.415 fused_ordering(949) 00:08:50.415 fused_ordering(950) 00:08:50.415 fused_ordering(951) 00:08:50.415 fused_ordering(952) 00:08:50.415 fused_ordering(953) 00:08:50.415 fused_ordering(954) 00:08:50.415 fused_ordering(955) 00:08:50.415 fused_ordering(956) 00:08:50.415 
fused_ordering(957) 00:08:50.415 fused_ordering(958) 00:08:50.415 fused_ordering(959) 00:08:50.415 fused_ordering(960) 00:08:50.415 fused_ordering(961) 00:08:50.415 fused_ordering(962) 00:08:50.415 fused_ordering(963) 00:08:50.415 fused_ordering(964) 00:08:50.415 fused_ordering(965) 00:08:50.415 fused_ordering(966) 00:08:50.415 fused_ordering(967) 00:08:50.415 fused_ordering(968) 00:08:50.415 fused_ordering(969) 00:08:50.415 fused_ordering(970) 00:08:50.415 fused_ordering(971) 00:08:50.415 fused_ordering(972) 00:08:50.415 fused_ordering(973) 00:08:50.415 fused_ordering(974) 00:08:50.415 fused_ordering(975) 00:08:50.415 fused_ordering(976) 00:08:50.415 fused_ordering(977) 00:08:50.415 fused_ordering(978) 00:08:50.415 fused_ordering(979) 00:08:50.415 fused_ordering(980) 00:08:50.415 fused_ordering(981) 00:08:50.415 fused_ordering(982) 00:08:50.415 fused_ordering(983) 00:08:50.415 fused_ordering(984) 00:08:50.415 fused_ordering(985) 00:08:50.415 fused_ordering(986) 00:08:50.415 fused_ordering(987) 00:08:50.415 fused_ordering(988) 00:08:50.415 fused_ordering(989) 00:08:50.415 fused_ordering(990) 00:08:50.415 fused_ordering(991) 00:08:50.415 fused_ordering(992) 00:08:50.415 fused_ordering(993) 00:08:50.415 fused_ordering(994) 00:08:50.415 fused_ordering(995) 00:08:50.415 fused_ordering(996) 00:08:50.415 fused_ordering(997) 00:08:50.415 fused_ordering(998) 00:08:50.415 fused_ordering(999) 00:08:50.415 fused_ordering(1000) 00:08:50.415 fused_ordering(1001) 00:08:50.415 fused_ordering(1002) 00:08:50.415 fused_ordering(1003) 00:08:50.415 fused_ordering(1004) 00:08:50.415 fused_ordering(1005) 00:08:50.415 fused_ordering(1006) 00:08:50.415 fused_ordering(1007) 00:08:50.415 fused_ordering(1008) 00:08:50.415 fused_ordering(1009) 00:08:50.415 fused_ordering(1010) 00:08:50.415 fused_ordering(1011) 00:08:50.415 fused_ordering(1012) 00:08:50.415 fused_ordering(1013) 00:08:50.415 fused_ordering(1014) 00:08:50.415 fused_ordering(1015) 00:08:50.415 fused_ordering(1016) 00:08:50.415 fused_ordering(1017) 00:08:50.415 fused_ordering(1018) 00:08:50.415 fused_ordering(1019) 00:08:50.415 fused_ordering(1020) 00:08:50.415 fused_ordering(1021) 00:08:50.415 fused_ordering(1022) 00:08:50.415 fused_ordering(1023) 00:08:50.415 10:49:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:08:50.415 10:49:06 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:08:50.415 10:49:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:50.415 10:49:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:08:50.415 10:49:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:50.415 10:49:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:08:50.415 10:49:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:50.415 10:49:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:50.415 rmmod nvme_tcp 00:08:50.415 rmmod nvme_fabrics 00:08:50.415 rmmod nvme_keyring 00:08:50.415 10:49:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:50.415 10:49:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:08:50.415 10:49:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:08:50.415 10:49:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2730339 ']' 00:08:50.415 10:49:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2730339 
00:08:50.415 10:49:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@946 -- # '[' -z 2730339 ']' 00:08:50.415 10:49:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # kill -0 2730339 00:08:50.415 10:49:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # uname 00:08:50.415 10:49:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:08:50.415 10:49:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2730339 00:08:50.415 10:49:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:08:50.415 10:49:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:08:50.415 10:49:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2730339' 00:08:50.415 killing process with pid 2730339 00:08:50.415 10:49:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # kill 2730339 00:08:50.415 [2024-05-15 10:49:06.518492] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:08:50.415 10:49:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # wait 2730339 00:08:50.674 10:49:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:50.674 10:49:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:50.674 10:49:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:50.674 10:49:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:50.674 10:49:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:50.674 10:49:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.674 10:49:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.674 10:49:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.210 10:49:08 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:53.210 00:08:53.210 real 0m10.456s 00:08:53.210 user 0m7.883s 00:08:53.210 sys 0m5.354s 00:08:53.210 10:49:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # xtrace_disable 00:08:53.210 10:49:08 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:08:53.210 ************************************ 00:08:53.210 END TEST nvmf_fused_ordering 00:08:53.210 ************************************ 00:08:53.210 10:49:08 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:53.210 10:49:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:08:53.210 10:49:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:08:53.210 10:49:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:53.210 ************************************ 00:08:53.210 START TEST nvmf_delete_subsystem 00:08:53.210 ************************************ 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 
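The START TEST/END TEST banners and the real/user/sys timing block above come from the run_test wrapper in autotest_common.sh, through which every suite in this log is launched. A minimal sketch of that pattern, assuming simplified banners and error handling (this is an illustrative reconstruction, not the verbatim helper):

```bash
#!/usr/bin/env bash
# Hedged sketch of the run_test banner/timing pattern seen in this log --
# NOT the verbatim helper from test/common/autotest_common.sh.
run_test() {
	local test_name=$1
	shift

	echo "************************************"
	echo "START TEST $test_name"
	echo "************************************"

	# 'time' produces the real/user/sys block printed between suites.
	time "$@"
	local rc=$?

	echo "************************************"
	echo "END TEST $test_name"
	echo "************************************"
	return $rc
}

# Usage mirroring the trace above ($rootdir is a hypothetical stand-in for
# the checkout path):
# run_test nvmf_delete_subsystem \
#     "$rootdir/test/nvmf/target/delete_subsystem.sh" --transport=tcp
```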
00:08:53.210 * Looking for test storage... 00:08:53.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.210 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.211 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:53.211 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:53.211 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:53.211 10:49:08 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:55.743 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:55.743 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:55.743 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:55.743 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:55.743 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:55.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:55.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:08:55.744 00:08:55.744 --- 10.0.0.2 ping statistics --- 00:08:55.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.744 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:55.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:55.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:08:55.744 00:08:55.744 --- 10.0.0.1 ping statistics --- 00:08:55.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:55.744 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@720 -- # xtrace_disable 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2733367 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2733367 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@827 -- # '[' -z 2733367 ']' 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local max_retries=100 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
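The network plumbing traced above is easier to read untangled: nvmftestinit moves one port of the dual-port e810 NIC into a private network namespace, so the SPDK target (10.0.0.2 on cvl_0_0) and the initiator side (10.0.0.1 on cvl_0_1) get a real TCP path on a single host, and nvmf_tgt then runs inside that namespace. Condensed from the trace; interface names and addresses are specific to this test bed:

```bash
# Target/initiator separation via a network namespace, as traced above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the ns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

# The target itself then runs inside the namespace (backgrounding and PID
# capture elided here):
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x3
```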
00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # xtrace_disable 00:08:55.744 10:49:11 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:55.744 [2024-05-15 10:49:11.756675] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:08:55.744 [2024-05-15 10:49:11.756761] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:55.744 EAL: No free 2048 kB hugepages reported on node 1 00:08:55.744 [2024-05-15 10:49:11.837503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:55.744 [2024-05-15 10:49:11.954123] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:55.744 [2024-05-15 10:49:11.954187] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:55.744 [2024-05-15 10:49:11.954204] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:55.744 [2024-05-15 10:49:11.954217] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:55.744 [2024-05-15 10:49:11.954229] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:55.744 [2024-05-15 10:49:11.954296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:55.744 [2024-05-15 10:49:11.954302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.677 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:08:56.677 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # return 0 00:08:56.677 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:56.677 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:56.677 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:56.677 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:56.677 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:56.677 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.677 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:56.677 [2024-05-15 10:49:12.779660] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:56.677 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.677 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:56.677 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.677 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:56.677 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.677 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.677 10:49:12 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.677 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:56.677 [2024-05-15 10:49:12.795764] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:08:56.677 [2024-05-15 10:49:12.796064] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.677 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.677 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:56.677 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.677 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:56.677 NULL1 00:08:56.677 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.677 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:56.677 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.677 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:56.677 Delay0 00:08:56.677 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.678 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.678 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:56.678 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:56.678 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:56.678 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2733520 00:08:56.678 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:56.678 10:49:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:56.678 EAL: No free 2048 kB hugepages reported on node 1 00:08:56.678 [2024-05-15 10:49:12.870723] subsystem.c:1520:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
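With the target listening, the test arms its race: the null bdev is wrapped in a delay bdev whose average and p99 latencies are all 1,000,000 us (one second) for both reads and writes, so every command spdk_nvme_perf submits at queue depth 128 is still outstanding when the subsystem is deleted two seconds into the 5-second run. Condensed from the trace; the script actually goes through the rpc_cmd helper, so spelling the calls as direct rpc.py invocations is a simplifying assumption:

```bash
# Setup and teardown race condensed from the trace above (rpc.py invocation
# style is an assumption; the script uses the rpc_cmd wrapper).
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512          # 1000 MiB backing bdev, 512 B blocks
rpc.py bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000 # avg/p99 read+write latency, in us
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &   # I/O load, backgrounded
sleep 2
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # delete while I/O is in flight
```

The flood of error completions that follows is therefore the point of the test, not a malfunction: sct=0, sc=8 is the NVMe generic status "Command Aborted due to SQ Deletion", reported by each queued command once the subsystem (and with it the submission queues) disappears, and the interleaved "starting I/O failed: -6" lines are perf failing to submit new I/O, most plausibly -ENXIO, after its qpairs are gone.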
00:08:59.203 10:49:14 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 10:49:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 10:49:14 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[several hundred interleaved "Read/Write completed with error (sct=0, sc=8)" completions and "starting I/O failed: -6" submission failures from the in-flight spdk_nvme_perf workload elided; timestamps run from 00:08:59.203 to 00:08:59.769]
00:08:59.769 [2024-05-15 10:49:15.973503] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c57f0 is same with the state(5) to be set
[further "Read/Write completed with error (sct=0, sc=8)" completions at 00:09:00.027 elided]
[2024-05-15 10:49:16.004947] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20cc790 is same with the state(5) to be set
00:09:00.027 Read completed with error (sct=0, sc=8) 00:09:00.027 Read completed with error (sct=0,
sc=8) 00:09:00.027 Read completed with error (sct=0, sc=8) 00:09:00.027 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 [2024-05-15 10:49:16.005237] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a6880 is same with the state(5) to be set 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Write 
completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 [2024-05-15 10:49:16.005736] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe3a800c600 is same with the state(5) to be set 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Write completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 Read completed with error (sct=0, sc=8) 00:09:00.028 [2024-05-15 10:49:16.006030] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe3a800bfe0 is same with the state(5) to be set 00:09:00.028 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.028 Initializing NVMe Controllers 00:09:00.028 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:00.028 Controller IO queue size 128, 
00:09:00.028 Controller IO queue size 128, less than required.
00:09:00.028 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:00.028 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:09:00.028 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:09:00.028 Initialization complete. Launching workers.
00:09:00.028 ========================================================
00:09:00.028                                                      Latency(us)
00:09:00.028 Device Information                                                       :    IOPS   MiB/s    Average        min        max
00:09:00.028 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  185.94    0.09  906748.60     767.80 1013531.84
00:09:00.028 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  191.40    0.09  891370.72     873.30 1013191.48
00:09:00.028 ========================================================
00:09:00.028 Total                                                                    :  377.34    0.18  898948.52     767.80 1013531.84
00:09:00.028
00:09:00.028 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:09:00.028 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2733520
[2024-05-15 10:49:16.006958] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20c57f0 (9): Bad file descriptor
00:09:00.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:09:00.028 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:09:00.286 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:09:00.286 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2733520
00:09:00.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2733520) - No such process
00:09:00.286 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2733520
00:09:00.286 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:09:00.286 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 2733520
00:09:00.286 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
00:09:00.286 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:09:00.286 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
00:09:00.286 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:09:00.286 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 2733520
00:09:00.286 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1
00:09:00.286 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:09:00.286 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:09:00.286 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:09:00.286 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:09:00.286 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:00.286 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set
+x 00:09:00.544 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.544 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:00.544 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.544 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:00.544 [2024-05-15 10:49:16.528133] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.544 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.544 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:00.544 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:00.544 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:00.544 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:00.544 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2733933 00:09:00.544 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:09:00.544 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2733933 00:09:00.544 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:00.544 10:49:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:09:00.544 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.544 [2024-05-15 10:49:16.584283] subsystem.c:1520:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
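The trace above is the heart of this test: spdk_nvme_perf is started in the background against nqn.2016-06.io.spdk:cnode1, the subsystem is deleted while 128-deep random I/O is in flight, and the script polls the perf pid until it exits with the expected abort errors. A minimal standalone sketch of that pattern, assuming a running nvmf_tgt already listening on 10.0.0.2:4420; SPDK_DIR and the 20-iteration bound are illustrative, not taken verbatim from the harness:

#!/usr/bin/env bash
# Sketch: delete an NVMe-oF subsystem while spdk_nvme_perf is driving I/O at it.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # illustrative checkout path

# Same perf invocation as in the log above, backgrounded.
"$SPDK_DIR/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# Delete the subsystem out from under the workload; in-flight commands then
# complete with the aborted status seen above and perf exits non-zero.
"$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

# Poll for exit: kill -0 sends no signal, it only checks the pid still exists.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    (( delay++ > 20 )) && { echo 'perf did not exit in time' >&2; exit 1; }
    sleep 0.5
done
wait "$perf_pid" || true   # reap it; a non-zero status is expected here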
00:09:01.108 10:49:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:01.108 10:49:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2733933
00:09:01.108 10:49:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
[five further identical polling iterations at 00:09:01.366, 00:09:01.931, 00:09:02.497, 00:09:03.063 and 00:09:03.628 elided: each re-checks (( delay++ > 20 )), confirms pid 2733933 with kill -0, and sleeps 0.5 s]
00:09:03.916 Initializing NVMe Controllers
00:09:03.916 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:09:03.916 Controller IO queue size 128, less than required.
00:09:03.916 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:03.916 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:09:03.916 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:09:03.916 Initialization complete. Launching workers.
00:09:03.916 ========================================================
00:09:03.916                                                      Latency(us)
00:09:03.916 Device Information                                                       :    IOPS   MiB/s    Average        min        max
00:09:03.916 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06 1004148.37 1000288.95 1010977.25
00:09:03.916 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06 1004652.34 1000387.24 1013106.70
00:09:03.916 ========================================================
00:09:03.916 Total                                                                    :  256.00    0.12 1004400.35 1000288.95 1013106.70
00:09:03.916
00:09:03.916 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:03.916 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2733933
00:09:03.916 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2733933) - No such process
00:09:03.916 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2733933
00:09:03.916 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:09:03.916 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:09:03.916 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:03.916 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:09:03.916 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:03.916 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:09:03.916 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:03.916 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:03.916 rmmod nvme_tcp
00:09:03.916 rmmod nvme_fabrics
00:09:03.916 rmmod nvme_keyring
00:09:03.916 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:03.916 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:09:03.916 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:09:03.916 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2733367 ']'
00:09:03.916 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2733367
00:09:03.916 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@946 -- # '[' -z 2733367 ']'
00:09:03.916 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # kill -0 2733367
00:09:03.916 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # uname
00:09:03.916 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:09:03.916 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2733367
00:09:03.916 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:09:03.916 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:09:03.916 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2733367'
00:09:03.916 killing process with pid 2733367
00:09:03.916 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # kill 2733367
00:09:03.916 [2024-05-15 10:49:20.145817] app.c:1024:log_deprecation_hits: *WARNING*:
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:03.916 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # wait 2733367 00:09:04.174 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:04.174 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:04.174 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:04.174 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:04.174 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:04.174 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.174 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:04.174 10:49:20 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.743 10:49:22 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:06.743 00:09:06.743 real 0m13.544s 00:09:06.743 user 0m29.592s 00:09:06.743 sys 0m3.404s 00:09:06.743 10:49:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:06.743 10:49:22 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:06.743 ************************************ 00:09:06.743 END TEST nvmf_delete_subsystem 00:09:06.743 ************************************ 00:09:06.743 10:49:22 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:09:06.743 10:49:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:06.743 10:49:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:06.743 10:49:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:06.743 ************************************ 00:09:06.743 START TEST nvmf_ns_masking 00:09:06.743 ************************************ 00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1121 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:09:06.743 * Looking for test storage... 
00:09:06.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[repeated /opt/golangci, /opt/protoc and /opt/go entries elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
[paths/export.sh@3 through @6 elided: two further PATH prepends, an export PATH, and an echo of the resulting PATH, each repeating the same /opt toolchain entries]
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=38b938d2-1b1a-4fe7-8ed6-91d48d7e891b
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit
00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:09:06.743 10:49:22
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:09:06.743 10:49:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:09.273 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:09.273 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:09.273 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:09.274 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:09.274 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT
00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:09:09.274 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:09.274 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms
00:09:09.274
00:09:09.274 --- 10.0.0.2 ping statistics ---
00:09:09.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:09.274 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms
00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:09.274 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:09.274 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms
00:09:09.274
00:09:09.274 --- 10.0.0.1 ping statistics ---
00:09:09.274 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:09.274 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms
00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0
00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF
00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@720 -- # xtrace_disable
00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2736678
00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2736678
00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@827 -- # '[' -z 2736678 ']'
00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local max_retries=100
00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:09.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # xtrace_disable
00:09:09.274 10:49:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:09:09.274 [2024-05-15 10:49:25.248723] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization...
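For reference, the loopback plumbing that nvmf_tcp_init assembled above, plus the target launch that follows, fits in a few commands; a sketch using the same names as this run (cvl_0_0 and cvl_0_1 are this host's renamed ice ports and will differ on other machines):

# Sketch: loopback NVMe/TCP topology as assembled above.
# cvl_0_0 (target side) moves into a namespace; cvl_0_1 stays in the root ns.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
ping -c 1 10.0.0.2                                                  # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> root ns
# The target then runs inside the namespace, exactly as logged above:
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF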
00:09:09.274 [2024-05-15 10:49:25.248806] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.274 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.274 [2024-05-15 10:49:25.329715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:09.274 [2024-05-15 10:49:25.450318] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.274 [2024-05-15 10:49:25.450381] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.274 [2024-05-15 10:49:25.450397] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.274 [2024-05-15 10:49:25.450411] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.274 [2024-05-15 10:49:25.450422] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:09.274 [2024-05-15 10:49:25.450502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.274 [2024-05-15 10:49:25.450554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.274 [2024-05-15 10:49:25.450670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:09.274 [2024-05-15 10:49:25.450672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.209 10:49:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:10.209 10:49:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@860 -- # return 0 00:09:10.209 10:49:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:10.209 10:49:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:10.209 10:49:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:10.209 10:49:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.209 10:49:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:10.466 [2024-05-15 10:49:26.521572] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.466 10:49:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:09:10.466 10:49:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:09:10.466 10:49:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:10.725 Malloc1 00:09:10.725 10:49:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:10.983 Malloc2 00:09:10.983 10:49:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:11.241 10:49:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:09:11.498 10:49:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:11.756 [2024-05-15 10:49:27.783359] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:11.756 [2024-05-15 10:49:27.783689] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:11.756 10:49:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:09:11.756 10:49:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 38b938d2-1b1a-4fe7-8ed6-91d48d7e891b -a 10.0.0.2 -s 4420 -i 4 00:09:11.756 10:49:27 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:09:11.756 10:49:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:09:11.756 10:49:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:11.756 10:49:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:09:11.756 10:49:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:09:14.285 10:49:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:14.285 10:49:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:14.285 10:49:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:14.285 10:49:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:14.285 10:49:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:14.285 10:49:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:09:14.285 10:49:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:14.285 10:49:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:14.285 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:14.285 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:14.285 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:09:14.285 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:14.285 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:14.285 [ 0]:0x1 00:09:14.285 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:14.285 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:14.285 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=6004ac22e608400790d0cd13ea618e9a 00:09:14.285 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 6004ac22e608400790d0cd13ea618e9a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:14.285 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:09:14.285 10:49:30 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:09:14.285 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:14.285 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:14.285 [ 0]:0x1 00:09:14.285 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:14.285 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:14.285 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=6004ac22e608400790d0cd13ea618e9a 00:09:14.285 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 6004ac22e608400790d0cd13ea618e9a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:14.285 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:09:14.285 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:14.285 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:14.285 [ 1]:0x2 00:09:14.285 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:14.285 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:14.285 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=22c16d368e8949fcb966ca263153c5a2 00:09:14.285 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 22c16d368e8949fcb966ca263153c5a2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:14.285 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:09:14.285 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:14.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:14.543 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:14.800 10:49:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:09:15.058 10:49:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:09:15.058 10:49:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 38b938d2-1b1a-4fe7-8ed6-91d48d7e891b -a 10.0.0.2 -s 4420 -i 4 00:09:15.058 10:49:31 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:09:15.058 10:49:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:09:15.058 10:49:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:15.058 10:49:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 1 ]] 00:09:15.058 10:49:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=1 00:09:15.058 10:49:31 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # 
grep -c SPDKISFASTANDAWESOME 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:17.596 [ 0]:0x2 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=22c16d368e8949fcb966ca263153c5a2 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 22c16d368e8949fcb966ca263153c5a2 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:17.596 [ 0]:0x1 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=6004ac22e608400790d0cd13ea618e9a 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 6004ac22e608400790d0cd13ea618e9a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:17.596 [ 1]:0x2 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=22c16d368e8949fcb966ca263153c5a2 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 22c16d368e8949fcb966ca263153c5a2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:17.596 10:49:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:17.853 10:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:09:17.853 10:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:17.853 10:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:17.853 10:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:17.853 10:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:17.853 10:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:17.853 10:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:17.853 10:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:17.853 10:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:17.853 10:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:17.853 10:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:17.853 10:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:18.110 10:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:18.110 10:49:34 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:18.110 10:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:18.110 10:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:18.110 10:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:18.110 10:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:18.110 10:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:09:18.110 10:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:18.110 10:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:18.110 [ 0]:0x2 00:09:18.110 10:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:18.110 10:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:18.110 10:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=22c16d368e8949fcb966ca263153c5a2 00:09:18.110 10:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 22c16d368e8949fcb966ca263153c5a2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:18.110 10:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:09:18.110 10:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:18.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:18.110 10:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:18.368 10:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:09:18.368 10:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 38b938d2-1b1a-4fe7-8ed6-91d48d7e891b -a 10.0.0.2 -s 4420 -i 4 00:09:18.368 10:49:34 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:18.368 10:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1194 -- # local i=0 00:09:18.368 10:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:18.368 10:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:09:18.368 10:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:09:18.368 10:49:34 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # sleep 2 00:09:20.894 10:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:20.894 10:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:20.894 10:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:20.894 10:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:09:20.894 10:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:20.894 10:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # return 0 00:09:20.894 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 
-- # nvme list-subsys -o json 00:09:20.894 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:09:20.894 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:09:20.894 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:09:20.894 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:09:20.894 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:20.894 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:20.894 [ 0]:0x1 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=6004ac22e608400790d0cd13ea618e9a 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 6004ac22e608400790d0cd13ea618e9a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:20.895 [ 1]:0x2 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=22c16d368e8949fcb966ca263153c5a2 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 22c16d368e8949fcb966ca263153c5a2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking 
-- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:09:20.895 10:49:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:09:20.895 [ 0]:0x2 00:09:20.895 10:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:09:20.895 10:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:09:20.895 10:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=22c16d368e8949fcb966ca263153c5a2 00:09:20.895 10:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 22c16d368e8949fcb966ca263153c5a2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:20.895 10:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:20.895 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:09:20.895 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:20.895 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:20.895 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.895 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:20.895 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.895 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:20.895 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:20.895 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:20.895 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:20.895 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:09:21.153 [2024-05-15 10:49:37.311959] nvmf_rpc.c:1776:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:09:21.153 
request:
00:09:21.153 {
00:09:21.153 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:09:21.153 "nsid": 2,
00:09:21.153 "host": "nqn.2016-06.io.spdk:host1",
00:09:21.153 "method": "nvmf_ns_remove_host",
00:09:21.153 "req_id": 1
00:09:21.153 }
00:09:21.153 Got JSON-RPC error response
00:09:21.153 response:
00:09:21.153 {
00:09:21.153 "code": -32602,
00:09:21.153 "message": "Invalid parameters"
00:09:21.153 }
00:09:21.153 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:09:21.153 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:09:21.153 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:09:21.153 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:09:21.153 10:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1
00:09:21.153 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0
00:09:21.153 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1
00:09:21.153 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible
00:09:21.153 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:09:21.153 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible
00:09:21.153 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:09:21.153 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1
00:09:21.153 10:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:09:21.153 10:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1
00:09:21.153 10:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:09:21.153 10:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid
00:09:21.411 10:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000
00:09:21.411 10:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:09:21.411 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1
00:09:21.411 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:09:21.411 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:09:21.411 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:09:21.411 10:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2
00:09:21.411 10:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0
00:09:21.411 10:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2
00:09:21.411 [ 0]:0x2
00:09:21.411 10:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:09:21.411 10:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid
00:09:21.411 10:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=22c16d368e8949fcb966ca263153c5a2
00:09:21.411 10:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 22c16d368e8949fcb966ca263153c5a2 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:09:21.411 10:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # disconnect 00:09:21.411 10:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:21.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.411 10:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:21.688 10:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:09:21.688 10:49:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:09:21.689 10:49:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:21.689 10:49:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:09:21.689 10:49:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:21.689 10:49:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:09:21.689 10:49:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:21.689 10:49:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:21.689 rmmod nvme_tcp 00:09:21.689 rmmod nvme_fabrics 00:09:21.980 rmmod nvme_keyring 00:09:21.980 10:49:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:21.980 10:49:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:09:21.980 10:49:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:09:21.980 10:49:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2736678 ']' 00:09:21.980 10:49:37 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2736678 00:09:21.980 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@946 -- # '[' -z 2736678 ']' 00:09:21.980 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@950 -- # kill -0 2736678 00:09:21.980 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # uname 00:09:21.980 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:21.980 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2736678 00:09:21.980 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:21.980 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:21.980 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2736678' 00:09:21.980 killing process with pid 2736678 00:09:21.980 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # kill 2736678 00:09:21.980 [2024-05-15 10:49:37.973149] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:21.980 10:49:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@970 -- # wait 2736678 00:09:22.239 10:49:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:22.239 10:49:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:22.239 10:49:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:22.239 10:49:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s 
]] 00:09:22.239 10:49:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:22.239 10:49:38 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.239 10:49:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:22.239 10:49:38 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.147 10:49:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:24.147 00:09:24.147 real 0m17.855s 00:09:24.147 user 0m54.827s 00:09:24.147 sys 0m4.139s 00:09:24.147 10:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:24.147 10:49:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:09:24.147 ************************************ 00:09:24.147 END TEST nvmf_ns_masking 00:09:24.147 ************************************ 00:09:24.406 10:49:40 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:09:24.406 10:49:40 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:24.406 10:49:40 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:24.406 10:49:40 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:24.406 10:49:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:24.406 ************************************ 00:09:24.406 START TEST nvmf_nvme_cli 00:09:24.406 ************************************ 00:09:24.406 10:49:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:09:24.406 * Looking for test storage... 
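For reference between the two suites: stripped of the harness, the nvmf_ns_masking flow traced above reduces to three rpc.py calls. A minimal sketch, assuming a target that already serves nqn.2016-06.io.spdk:cnode1 with bdev Malloc1 attached; rpc.py is shown as a relative path rather than the full Jenkins workspace path used in the trace:

# Attach the namespace with visibility disabled: no host sees nsid 1 yet.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
# Grant exactly one host NQN; only that initiator now enumerates nsid 1.
./scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
# Revoke the grant; the host's `nvme list-ns /dev/nvme0` drops the namespace again.
./scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

On the host side the suite checks each step with the ns_is_visible helper seen at ns_masking.sh@39-41: nvme list-ns piped through grep, then the NGUID from nvme id-ns compared against the all-zero placeholder that masked namespaces report.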
00:09:24.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:09:24.407 10:49:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:26.939 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:26.939 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:26.939 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:26.939 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:26.939 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:26.940 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:26.940 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:26.940 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:26.940 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:26.940 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:26.940 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:26.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:26.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:09:26.940 00:09:26.940 --- 10.0.0.2 ping statistics --- 00:09:26.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.940 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:09:26.940 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:26.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:26.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:09:26.940 00:09:26.940 --- 10.0.0.1 ping statistics --- 00:09:26.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:26.940 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:09:26.940 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:26.940 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:09:26.940 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:26.940 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:26.940 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:26.940 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:26.940 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:26.940 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:26.940 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:27.197 10:49:43 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:09:27.197 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:27.197 10:49:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@720 -- # xtrace_disable 00:09:27.197 10:49:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:27.197 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2740633 00:09:27.197 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:27.197 10:49:43 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2740633 00:09:27.197 10:49:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@827 -- # '[' -z 2740633 ']' 00:09:27.197 10:49:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.197 10:49:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:27.197 10:49:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.197 10:49:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:27.197 10:49:43 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:27.197 [2024-05-15 10:49:43.229115] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:09:27.197 [2024-05-15 10:49:43.229193] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.197 EAL: No free 2048 kB hugepages reported on node 1 00:09:27.197 [2024-05-15 10:49:43.305568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:27.197 [2024-05-15 10:49:43.417859] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.197 [2024-05-15 10:49:43.417912] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
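The startup banner above comes from nvmfappstart launching nvmf_tgt inside the cvl_0_0_ns_spdk network namespace; the xtrace that follows builds the target over JSON-RPC. Condensed, and using only commands that appear in this trace (rpc.py and nvmf_tgt paths shortened), the bring-up is roughly:

# Start the target in the test netns: shm id 0, all trace groups, 4-core mask.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8 KiB in-capsule data
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # two 64 MiB ramdisks, 512 B blocks
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Before issuing these RPCs the harness polls with waitforlisten (here on pid 2740633) until the target's RPC socket answers, which is what the "Waiting for process to start up..." message below reflects.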
00:09:27.197 [2024-05-15 10:49:43.417948] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:27.197 [2024-05-15 10:49:43.417960] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:27.197 [2024-05-15 10:49:43.417970] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.197 [2024-05-15 10:49:43.418030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.197 [2024-05-15 10:49:43.418088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.197 [2024-05-15 10:49:43.421965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:27.197 [2024-05-15 10:49:43.421970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # return 0 00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:28.131 [2024-05-15 10:49:44.219895] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:28.131 Malloc0 00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:28.131 Malloc1 00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:28.131 10:49:44 
nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:09:28.131 [2024-05-15 10:49:44.302521] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:09:28.131 [2024-05-15 10:49:44.302799] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x
00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:28.131 10:49:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420
00:09:28.389
00:09:28.389 Discovery Log Number of Records 2, Generation counter 2
00:09:28.389 =====Discovery Log Entry 0======
00:09:28.389 trtype: tcp
00:09:28.389 adrfam: ipv4
00:09:28.389 subtype: current discovery subsystem
00:09:28.389 treq: not required
00:09:28.389 portid: 0
00:09:28.389 trsvcid: 4420
00:09:28.389 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:09:28.389 traddr: 10.0.0.2
00:09:28.389 eflags: explicit discovery connections, duplicate discovery information
00:09:28.389 sectype: none
00:09:28.389 =====Discovery Log Entry 1======
00:09:28.389 trtype: tcp
00:09:28.389 adrfam: ipv4
00:09:28.389 subtype: nvme subsystem
00:09:28.389 treq: not required
00:09:28.389 portid: 0
00:09:28.389 trsvcid: 4420
00:09:28.389 subnqn: nqn.2016-06.io.spdk:cnode1
00:09:28.389 traddr: 10.0.0.2
00:09:28.389 eflags: none
00:09:28.389 sectype: none
00:09:28.389 10:49:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs))
00:09:28.389 10:49:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs
00:09:28.389 10:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _
00:09:28.389 10:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
00:09:28.389 10:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list
00:09:28.389 10:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]]
00:09:28.389 10:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _
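The get_nvme_devs loop traced here (nvmf/common.sh@521-526, continuing below) is a small parser over `nvme list`: it keeps only first-column tokens that name a device node, discarding the header and separator rows. A simplified sketch of the same pattern, not the verbatim SPDK helper:

# Print every /dev/nvme* device node reported by `nvme list`.
get_nvme_devs() {
    local dev _
    while read -r dev _; do
        # Header ("Node ...") and separator ("-----") rows fail this test.
        [[ $dev == /dev/nvme* ]] && echo "$dev"
    done < <(nvme list)
}

The suite stores the result in an array (devs=($(get_nvme_devs))) and, at nvme_cli.sh@59-62 further on, compares the device count before and after `nvme connect` to decide whether both namespaces of cnode1 appeared on the initiator.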
00:09:28.389 10:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:28.389 10:49:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:28.389 10:49:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:09:28.389 10:49:44 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:28.954 10:49:45 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:09:28.954 10:49:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1194 -- # local i=0 00:09:28.955 10:49:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:09:28.955 10:49:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # [[ -n 2 ]] 00:09:28.955 10:49:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # nvme_device_counter=2 00:09:28.955 10:49:45 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # sleep 2 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_devices=2 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # return 0 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:09:30.855 /dev/nvme0n1 ]] 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:09:30.855 10:49:47 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:09:30.855 10:49:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:31.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1215 -- # local i=0 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # return 0 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:31.113 rmmod nvme_tcp 00:09:31.113 rmmod nvme_fabrics 00:09:31.113 rmmod nvme_keyring 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r 
nvme-fabrics 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2740633 ']' 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2740633 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@946 -- # '[' -z 2740633 ']' 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # kill -0 2740633 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # uname 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2740633 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2740633' 00:09:31.113 killing process with pid 2740633 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@965 -- # kill 2740633 00:09:31.113 [2024-05-15 10:49:47.258908] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:09:31.113 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # wait 2740633 00:09:31.372 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:31.372 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:31.372 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:31.372 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:31.372 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:31.372 10:49:47 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:31.372 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:31.372 10:49:47 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:33.913 10:49:49 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:33.913 00:09:33.913 real 0m9.227s 00:09:33.913 user 0m17.345s 00:09:33.913 sys 0m2.565s 00:09:33.913 10:49:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1122 -- # xtrace_disable 00:09:33.913 10:49:49 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:09:33.913 ************************************ 00:09:33.913 END TEST nvmf_nvme_cli 00:09:33.913 ************************************ 00:09:33.913 10:49:49 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:09:33.913 10:49:49 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:33.913 10:49:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:09:33.913 10:49:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:09:33.913 10:49:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:33.913 ************************************ 00:09:33.913 START 
TEST nvmf_vfio_user 00:09:33.913 ************************************ 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:09:33.913 * Looking for test storage... 00:09:33.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 
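
Condensed from the sourcing above, the knobs that matter for this test are set as follows (a summary of the traced values, not a separate config file):

    TEST_TRANSPORT=VFIOUSER
    MALLOC_BDEV_SIZE=64       # per-device malloc bdev size, MiB
    MALLOC_BLOCK_SIZE=512     # block size, bytes
    NUM_DEVICES=2             # two vfio-user controllers are created
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    rm -rf /var/run/vfio-user # clean slate for the per-controller socket dirs
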
00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2741476 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2741476' 00:09:33.913 Process pid: 2741476 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2741476 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 2741476 ']' 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:09:33.913 10:49:49 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:09:33.913 [2024-05-15 10:49:49.805295] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:09:33.914 [2024-05-15 10:49:49.805380] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.914 EAL: No free 2048 kB hugepages reported on node 1 00:09:33.914 [2024-05-15 10:49:49.875673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:33.914 [2024-05-15 10:49:49.989410] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.914 [2024-05-15 10:49:49.989469] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.914 [2024-05-15 10:49:49.989485] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.914 [2024-05-15 10:49:49.989499] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.914 [2024-05-15 10:49:49.989511] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:33.914 [2024-05-15 10:49:49.989599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.914 [2024-05-15 10:49:49.989650] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.914 [2024-05-15 10:49:49.989764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:33.914 [2024-05-15 10:49:49.989766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.914 10:49:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:09:33.914 10:49:50 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:09:33.914 10:49:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:09:35.285 10:49:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:09:35.285 10:49:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:09:35.285 10:49:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:09:35.285 10:49:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:35.285 10:49:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:09:35.285 10:49:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:09:35.542 Malloc1 00:09:35.542 10:49:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:09:36.106 10:49:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:09:36.106 10:49:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:09:36.364 [2024-05-15 10:49:52.515802] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:09:36.364 10:49:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:36.364 10:49:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:09:36.364 10:49:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:09:36.622 Malloc2 00:09:36.622 10:49:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:09:36.879 10:49:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:09:37.136 10:49:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 
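
With the xtrace noise stripped, setup_nvmf_vfio_user boils down to the RPC sequence below, repeated once per device (i = 1, 2); commands and arguments are exactly as traced above, with nvmf_create_transport issued once before the per-device loop:

    $rpc_py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    $rpc_py bdev_malloc_create 64 512 -b Malloc$i
    $rpc_py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc_py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc_py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0

Each listener address is a directory rather than an IP: the vfio-user transport places the controller's socket (cntrl) under it, and clients attach with trtype:VFIOUSER traddr:<that directory>, as the identify/perf invocations below do.
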
00:09:37.396 10:49:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:09:37.396 10:49:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:09:37.396 10:49:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:37.396 10:49:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:09:37.396 10:49:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:09:37.396 10:49:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:37.396 [2024-05-15 10:49:53.548114] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:09:37.396 [2024-05-15 10:49:53.548157] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2742014 ] 00:09:37.396 EAL: No free 2048 kB hugepages reported on node 1 00:09:37.396 [2024-05-15 10:49:53.580148] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:09:37.396 [2024-05-15 10:49:53.585653] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:37.396 [2024-05-15 10:49:53.585680] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f00ae37c000 00:09:37.396 [2024-05-15 10:49:53.586650] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:37.396 [2024-05-15 10:49:53.587646] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:37.396 [2024-05-15 10:49:53.588652] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:37.396 [2024-05-15 10:49:53.589652] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:37.396 [2024-05-15 10:49:53.590659] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:37.396 [2024-05-15 10:49:53.591663] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:37.396 [2024-05-15 10:49:53.592669] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:37.396 [2024-05-15 10:49:53.593670] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:37.396 [2024-05-15 10:49:53.594679] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:37.396 [2024-05-15 10:49:53.594704] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f00ae371000 00:09:37.396 [2024-05-15 10:49:53.595828] 
vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:37.396 [2024-05-15 10:49:53.611514] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:09:37.396 [2024-05-15 10:49:53.611550] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:09:37.396 [2024-05-15 10:49:53.613796] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:37.396 [2024-05-15 10:49:53.613853] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:37.396 [2024-05-15 10:49:53.613979] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:09:37.396 [2024-05-15 10:49:53.614009] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:09:37.396 [2024-05-15 10:49:53.614021] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:09:37.396 [2024-05-15 10:49:53.614788] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:09:37.396 [2024-05-15 10:49:53.614807] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:09:37.396 [2024-05-15 10:49:53.614820] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:09:37.396 [2024-05-15 10:49:53.615794] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:09:37.396 [2024-05-15 10:49:53.615815] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:09:37.396 [2024-05-15 10:49:53.615829] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:09:37.396 [2024-05-15 10:49:53.618940] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:09:37.396 [2024-05-15 10:49:53.618961] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:37.396 [2024-05-15 10:49:53.619812] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:09:37.396 [2024-05-15 10:49:53.619831] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:09:37.396 [2024-05-15 10:49:53.619840] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:09:37.396 [2024-05-15 10:49:53.619852] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:37.396 
[2024-05-15 10:49:53.619966] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:09:37.396 [2024-05-15 10:49:53.619977] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:37.396 [2024-05-15 10:49:53.619986] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:09:37.396 [2024-05-15 10:49:53.620820] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:09:37.396 [2024-05-15 10:49:53.621826] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:09:37.396 [2024-05-15 10:49:53.622843] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:37.396 [2024-05-15 10:49:53.623837] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:37.396 [2024-05-15 10:49:53.624015] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:37.396 [2024-05-15 10:49:53.624856] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:09:37.396 [2024-05-15 10:49:53.624874] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:37.396 [2024-05-15 10:49:53.624884] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:09:37.396 [2024-05-15 10:49:53.624910] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:09:37.396 [2024-05-15 10:49:53.624928] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:09:37.396 [2024-05-15 10:49:53.624965] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:37.396 [2024-05-15 10:49:53.624977] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:37.396 [2024-05-15 10:49:53.624999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:37.396 [2024-05-15 10:49:53.625057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:37.396 [2024-05-15 10:49:53.625075] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:09:37.396 [2024-05-15 10:49:53.625084] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:09:37.396 [2024-05-15 10:49:53.625091] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:09:37.396 [2024-05-15 10:49:53.625099] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:37.396 [2024-05-15 10:49:53.625106] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:09:37.396 [2024-05-15 10:49:53.625115] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:09:37.396 [2024-05-15 10:49:53.625123] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:09:37.396 [2024-05-15 10:49:53.625140] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:09:37.396 [2024-05-15 10:49:53.625160] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:37.396 [2024-05-15 10:49:53.625180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:37.396 [2024-05-15 10:49:53.625198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:37.396 [2024-05-15 10:49:53.625212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:37.396 [2024-05-15 10:49:53.625239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:37.396 [2024-05-15 10:49:53.625252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:37.396 [2024-05-15 10:49:53.625260] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:09:37.396 [2024-05-15 10:49:53.625276] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:37.396 [2024-05-15 10:49:53.625290] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:37.397 [2024-05-15 10:49:53.625302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:37.397 [2024-05-15 10:49:53.625313] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:09:37.397 [2024-05-15 10:49:53.625321] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:37.397 [2024-05-15 10:49:53.625332] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:09:37.397 [2024-05-15 10:49:53.625345] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:09:37.397 [2024-05-15 10:49:53.625359] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:37.397 [2024-05-15 
10:49:53.625371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:37.397 [2024-05-15 10:49:53.625440] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:09:37.397 [2024-05-15 10:49:53.625456] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:09:37.397 [2024-05-15 10:49:53.625470] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:37.397 [2024-05-15 10:49:53.625478] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:37.397 [2024-05-15 10:49:53.625487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:37.397 [2024-05-15 10:49:53.625502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:37.397 [2024-05-15 10:49:53.625523] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:09:37.397 [2024-05-15 10:49:53.625544] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:09:37.397 [2024-05-15 10:49:53.625558] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:09:37.397 [2024-05-15 10:49:53.625570] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:37.397 [2024-05-15 10:49:53.625582] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:37.397 [2024-05-15 10:49:53.625591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:37.397 [2024-05-15 10:49:53.625620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:37.397 [2024-05-15 10:49:53.625637] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:37.397 [2024-05-15 10:49:53.625650] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:37.397 [2024-05-15 10:49:53.625662] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:37.397 [2024-05-15 10:49:53.625670] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:37.397 [2024-05-15 10:49:53.625679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:37.397 [2024-05-15 10:49:53.625691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:37.397 [2024-05-15 10:49:53.625709] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:37.397 
[2024-05-15 10:49:53.625721] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:09:37.397 [2024-05-15 10:49:53.625734] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:09:37.397 [2024-05-15 10:49:53.625744] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:37.397 [2024-05-15 10:49:53.625752] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:09:37.397 [2024-05-15 10:49:53.625761] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:09:37.397 [2024-05-15 10:49:53.625768] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:09:37.397 [2024-05-15 10:49:53.625792] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:09:37.397 [2024-05-15 10:49:53.625821] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:37.397 [2024-05-15 10:49:53.625839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:37.397 [2024-05-15 10:49:53.625873] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:37.397 [2024-05-15 10:49:53.625885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:37.397 [2024-05-15 10:49:53.625901] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:37.397 [2024-05-15 10:49:53.625915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:37.397 [2024-05-15 10:49:53.625938] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:37.397 [2024-05-15 10:49:53.625952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:37.397 [2024-05-15 10:49:53.625978] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:37.397 [2024-05-15 10:49:53.625990] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:37.397 [2024-05-15 10:49:53.625996] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:37.397 [2024-05-15 10:49:53.626002] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:37.397 [2024-05-15 10:49:53.626012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:37.397 [2024-05-15 10:49:53.626025] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:37.397 [2024-05-15 10:49:53.626034] 
nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:37.397 [2024-05-15 10:49:53.626042] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:37.397 [2024-05-15 10:49:53.626055] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:37.397 [2024-05-15 10:49:53.626066] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:37.397 [2024-05-15 10:49:53.626076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:37.397 [2024-05-15 10:49:53.626094] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:37.397 [2024-05-15 10:49:53.626103] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:37.397 [2024-05-15 10:49:53.626112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:37.397 [2024-05-15 10:49:53.626125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:37.397 [2024-05-15 10:49:53.626145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:37.397 [2024-05-15 10:49:53.626163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:37.397 [2024-05-15 10:49:53.626179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:37.397 ===================================================== 00:09:37.397 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:37.397 ===================================================== 00:09:37.397 Controller Capabilities/Features 00:09:37.397 ================================ 00:09:37.397 Vendor ID: 4e58 00:09:37.397 Subsystem Vendor ID: 4e58 00:09:37.397 Serial Number: SPDK1 00:09:37.397 Model Number: SPDK bdev Controller 00:09:37.397 Firmware Version: 24.05 00:09:37.397 Recommended Arb Burst: 6 00:09:37.397 IEEE OUI Identifier: 8d 6b 50 00:09:37.397 Multi-path I/O 00:09:37.397 May have multiple subsystem ports: Yes 00:09:37.397 May have multiple controllers: Yes 00:09:37.397 Associated with SR-IOV VF: No 00:09:37.397 Max Data Transfer Size: 131072 00:09:37.397 Max Number of Namespaces: 32 00:09:37.397 Max Number of I/O Queues: 127 00:09:37.397 NVMe Specification Version (VS): 1.3 00:09:37.397 NVMe Specification Version (Identify): 1.3 00:09:37.397 Maximum Queue Entries: 256 00:09:37.397 Contiguous Queues Required: Yes 00:09:37.397 Arbitration Mechanisms Supported 00:09:37.397 Weighted Round Robin: Not Supported 00:09:37.397 Vendor Specific: Not Supported 00:09:37.397 Reset Timeout: 15000 ms 00:09:37.397 Doorbell Stride: 4 bytes 00:09:37.397 NVM Subsystem Reset: Not Supported 00:09:37.397 Command Sets Supported 00:09:37.397 NVM Command Set: Supported 00:09:37.397 Boot Partition: Not Supported 00:09:37.397 Memory Page Size Minimum: 4096 bytes 00:09:37.397 Memory Page Size Maximum: 4096 bytes 00:09:37.397 Persistent Memory Region: Not Supported 00:09:37.397 Optional Asynchronous 
Events Supported 00:09:37.397 Namespace Attribute Notices: Supported 00:09:37.397 Firmware Activation Notices: Not Supported 00:09:37.397 ANA Change Notices: Not Supported 00:09:37.397 PLE Aggregate Log Change Notices: Not Supported 00:09:37.397 LBA Status Info Alert Notices: Not Supported 00:09:37.397 EGE Aggregate Log Change Notices: Not Supported 00:09:37.397 Normal NVM Subsystem Shutdown event: Not Supported 00:09:37.397 Zone Descriptor Change Notices: Not Supported 00:09:37.397 Discovery Log Change Notices: Not Supported 00:09:37.397 Controller Attributes 00:09:37.397 128-bit Host Identifier: Supported 00:09:37.397 Non-Operational Permissive Mode: Not Supported 00:09:37.398 NVM Sets: Not Supported 00:09:37.398 Read Recovery Levels: Not Supported 00:09:37.398 Endurance Groups: Not Supported 00:09:37.398 Predictable Latency Mode: Not Supported 00:09:37.398 Traffic Based Keep ALive: Not Supported 00:09:37.398 Namespace Granularity: Not Supported 00:09:37.398 SQ Associations: Not Supported 00:09:37.398 UUID List: Not Supported 00:09:37.398 Multi-Domain Subsystem: Not Supported 00:09:37.398 Fixed Capacity Management: Not Supported 00:09:37.398 Variable Capacity Management: Not Supported 00:09:37.398 Delete Endurance Group: Not Supported 00:09:37.398 Delete NVM Set: Not Supported 00:09:37.398 Extended LBA Formats Supported: Not Supported 00:09:37.398 Flexible Data Placement Supported: Not Supported 00:09:37.398 00:09:37.398 Controller Memory Buffer Support 00:09:37.398 ================================ 00:09:37.398 Supported: No 00:09:37.398 00:09:37.398 Persistent Memory Region Support 00:09:37.398 ================================ 00:09:37.398 Supported: No 00:09:37.398 00:09:37.398 Admin Command Set Attributes 00:09:37.398 ============================ 00:09:37.398 Security Send/Receive: Not Supported 00:09:37.398 Format NVM: Not Supported 00:09:37.398 Firmware Activate/Download: Not Supported 00:09:37.398 Namespace Management: Not Supported 00:09:37.398 Device Self-Test: Not Supported 00:09:37.398 Directives: Not Supported 00:09:37.398 NVMe-MI: Not Supported 00:09:37.398 Virtualization Management: Not Supported 00:09:37.398 Doorbell Buffer Config: Not Supported 00:09:37.398 Get LBA Status Capability: Not Supported 00:09:37.398 Command & Feature Lockdown Capability: Not Supported 00:09:37.398 Abort Command Limit: 4 00:09:37.398 Async Event Request Limit: 4 00:09:37.398 Number of Firmware Slots: N/A 00:09:37.398 Firmware Slot 1 Read-Only: N/A 00:09:37.398 Firmware Activation Without Reset: N/A 00:09:37.398 Multiple Update Detection Support: N/A 00:09:37.398 Firmware Update Granularity: No Information Provided 00:09:37.398 Per-Namespace SMART Log: No 00:09:37.398 Asymmetric Namespace Access Log Page: Not Supported 00:09:37.398 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:09:37.398 Command Effects Log Page: Supported 00:09:37.398 Get Log Page Extended Data: Supported 00:09:37.398 Telemetry Log Pages: Not Supported 00:09:37.398 Persistent Event Log Pages: Not Supported 00:09:37.398 Supported Log Pages Log Page: May Support 00:09:37.398 Commands Supported & Effects Log Page: Not Supported 00:09:37.398 Feature Identifiers & Effects Log Page:May Support 00:09:37.398 NVMe-MI Commands & Effects Log Page: May Support 00:09:37.398 Data Area 4 for Telemetry Log: Not Supported 00:09:37.398 Error Log Page Entries Supported: 128 00:09:37.398 Keep Alive: Supported 00:09:37.398 Keep Alive Granularity: 10000 ms 00:09:37.398 00:09:37.398 NVM Command Set Attributes 00:09:37.398 ========================== 
00:09:37.398 Submission Queue Entry Size 00:09:37.398 Max: 64 00:09:37.398 Min: 64 00:09:37.398 Completion Queue Entry Size 00:09:37.398 Max: 16 00:09:37.398 Min: 16 00:09:37.398 Number of Namespaces: 32 00:09:37.398 Compare Command: Supported 00:09:37.398 Write Uncorrectable Command: Not Supported 00:09:37.398 Dataset Management Command: Supported 00:09:37.398 Write Zeroes Command: Supported 00:09:37.398 Set Features Save Field: Not Supported 00:09:37.398 Reservations: Not Supported 00:09:37.398 Timestamp: Not Supported 00:09:37.398 Copy: Supported 00:09:37.398 Volatile Write Cache: Present 00:09:37.398 Atomic Write Unit (Normal): 1 00:09:37.398 Atomic Write Unit (PFail): 1 00:09:37.398 Atomic Compare & Write Unit: 1 00:09:37.398 Fused Compare & Write: Supported 00:09:37.398 Scatter-Gather List 00:09:37.398 SGL Command Set: Supported (Dword aligned) 00:09:37.398 SGL Keyed: Not Supported 00:09:37.398 SGL Bit Bucket Descriptor: Not Supported 00:09:37.398 SGL Metadata Pointer: Not Supported 00:09:37.398 Oversized SGL: Not Supported 00:09:37.398 SGL Metadata Address: Not Supported 00:09:37.398 SGL Offset: Not Supported 00:09:37.398 Transport SGL Data Block: Not Supported 00:09:37.398 Replay Protected Memory Block: Not Supported 00:09:37.398 00:09:37.398 Firmware Slot Information 00:09:37.398 ========================= 00:09:37.398 Active slot: 1 00:09:37.398 Slot 1 Firmware Revision: 24.05 00:09:37.398 00:09:37.398 00:09:37.398 Commands Supported and Effects 00:09:37.398 ============================== 00:09:37.398 Admin Commands 00:09:37.398 -------------- 00:09:37.398 Get Log Page (02h): Supported 00:09:37.398 Identify (06h): Supported 00:09:37.398 Abort (08h): Supported 00:09:37.398 Set Features (09h): Supported 00:09:37.398 Get Features (0Ah): Supported 00:09:37.398 Asynchronous Event Request (0Ch): Supported 00:09:37.398 Keep Alive (18h): Supported 00:09:37.398 I/O Commands 00:09:37.398 ------------ 00:09:37.398 Flush (00h): Supported LBA-Change 00:09:37.398 Write (01h): Supported LBA-Change 00:09:37.398 Read (02h): Supported 00:09:37.398 Compare (05h): Supported 00:09:37.398 Write Zeroes (08h): Supported LBA-Change 00:09:37.398 Dataset Management (09h): Supported LBA-Change 00:09:37.398 Copy (19h): Supported LBA-Change 00:09:37.398 Unknown (79h): Supported LBA-Change 00:09:37.398 Unknown (7Ah): Supported 00:09:37.398 00:09:37.398 Error Log 00:09:37.398 ========= 00:09:37.398 00:09:37.398 Arbitration 00:09:37.398 =========== 00:09:37.398 Arbitration Burst: 1 00:09:37.398 00:09:37.398 Power Management 00:09:37.398 ================ 00:09:37.398 Number of Power States: 1 00:09:37.398 Current Power State: Power State #0 00:09:37.398 Power State #0: 00:09:37.398 Max Power: 0.00 W 00:09:37.398 Non-Operational State: Operational 00:09:37.398 Entry Latency: Not Reported 00:09:37.398 Exit Latency: Not Reported 00:09:37.398 Relative Read Throughput: 0 00:09:37.398 Relative Read Latency: 0 00:09:37.398 Relative Write Throughput: 0 00:09:37.398 Relative Write Latency: 0 00:09:37.398 Idle Power: Not Reported 00:09:37.398 Active Power: Not Reported 00:09:37.398 Non-Operational Permissive Mode: Not Supported 00:09:37.398 00:09:37.398 Health Information 00:09:37.398 ================== 00:09:37.398 Critical Warnings: 00:09:37.398 Available Spare Space: OK 00:09:37.398 Temperature: OK 00:09:37.398 Device Reliability: OK 00:09:37.398 Read Only: No 00:09:37.398 Volatile Memory Backup: OK 00:09:37.398 Current Temperature: 0 Kelvin (-2[2024-05-15 10:49:53.626301] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:37.398 [2024-05-15 10:49:53.626318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:37.655 [2024-05-15 10:49:53.626356] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:09:37.655 [2024-05-15 10:49:53.626373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:37.655 [2024-05-15 10:49:53.626384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:37.656 [2024-05-15 10:49:53.626394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:37.656 [2024-05-15 10:49:53.626404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:37.656 [2024-05-15 10:49:53.626871] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:09:37.656 [2024-05-15 10:49:53.626892] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:09:37.656 [2024-05-15 10:49:53.627867] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:37.656 [2024-05-15 10:49:53.627969] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:09:37.656 [2024-05-15 10:49:53.627985] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:09:37.656 [2024-05-15 10:49:53.628879] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:09:37.656 [2024-05-15 10:49:53.628900] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:09:37.656 [2024-05-15 10:49:53.628979] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:09:37.656 [2024-05-15 10:49:53.632946] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:37.656 73 Celsius) 00:09:37.656 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:37.656 Available Spare: 0% 00:09:37.656 Available Spare Threshold: 0% 00:09:37.656 Life Percentage Used: 0% 00:09:37.656 Data Units Read: 0 00:09:37.656 Data Units Written: 0 00:09:37.656 Host Read Commands: 0 00:09:37.656 Host Write Commands: 0 00:09:37.656 Controller Busy Time: 0 minutes 00:09:37.656 Power Cycles: 0 00:09:37.656 Power On Hours: 0 hours 00:09:37.656 Unsafe Shutdowns: 0 00:09:37.656 Unrecoverable Media Errors: 0 00:09:37.656 Lifetime Error Log Entries: 0 00:09:37.656 Warning Temperature Time: 0 minutes 00:09:37.656 Critical Temperature Time: 0 minutes 00:09:37.656 00:09:37.656 Number of Queues 00:09:37.656 ================ 00:09:37.656 Number of I/O Submission Queues: 127 00:09:37.656 Number of I/O Completion Queues: 127 00:09:37.656 00:09:37.656 Active Namespaces 00:09:37.656 ================= 00:09:37.656 Namespace 
ID:1 00:09:37.656 Error Recovery Timeout: Unlimited 00:09:37.656 Command Set Identifier: NVM (00h) 00:09:37.656 Deallocate: Supported 00:09:37.656 Deallocated/Unwritten Error: Not Supported 00:09:37.656 Deallocated Read Value: Unknown 00:09:37.656 Deallocate in Write Zeroes: Not Supported 00:09:37.656 Deallocated Guard Field: 0xFFFF 00:09:37.656 Flush: Supported 00:09:37.656 Reservation: Supported 00:09:37.656 Namespace Sharing Capabilities: Multiple Controllers 00:09:37.656 Size (in LBAs): 131072 (0GiB) 00:09:37.656 Capacity (in LBAs): 131072 (0GiB) 00:09:37.656 Utilization (in LBAs): 131072 (0GiB) 00:09:37.656 NGUID: AA3777C58ECB4C468FE3A32780233EC6 00:09:37.656 UUID: aa3777c5-8ecb-4c46-8fe3-a32780233ec6 00:09:37.656 Thin Provisioning: Not Supported 00:09:37.656 Per-NS Atomic Units: Yes 00:09:37.656 Atomic Boundary Size (Normal): 0 00:09:37.656 Atomic Boundary Size (PFail): 0 00:09:37.656 Atomic Boundary Offset: 0 00:09:37.656 Maximum Single Source Range Length: 65535 00:09:37.656 Maximum Copy Length: 65535 00:09:37.656 Maximum Source Range Count: 1 00:09:37.656 NGUID/EUI64 Never Reused: No 00:09:37.656 Namespace Write Protected: No 00:09:37.656 Number of LBA Formats: 1 00:09:37.656 Current LBA Format: LBA Format #00 00:09:37.656 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:37.656 00:09:37.656 10:49:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:37.656 EAL: No free 2048 kB hugepages reported on node 1 00:09:37.656 [2024-05-15 10:49:53.865766] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:42.994 Initializing NVMe Controllers 00:09:42.994 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:42.994 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:42.994 Initialization complete. Launching workers. 00:09:42.994 ======================================================== 00:09:42.994 Latency(us) 00:09:42.994 Device Information : IOPS MiB/s Average min max 00:09:42.994 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33669.02 131.52 3801.02 1187.56 8899.75 00:09:42.994 ======================================================== 00:09:42.994 Total : 33669.02 131.52 3801.02 1187.56 8899.75 00:09:42.994 00:09:42.994 [2024-05-15 10:49:58.884671] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:42.994 10:49:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:09:42.994 EAL: No free 2048 kB hugepages reported on node 1 00:09:42.994 [2024-05-15 10:49:59.117792] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:48.260 Initializing NVMe Controllers 00:09:48.260 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:48.260 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:09:48.260 Initialization complete. Launching workers. 
00:09:48.260 ======================================================== 00:09:48.260 Latency(us) 00:09:48.260 Device Information : IOPS MiB/s Average min max 00:09:48.260 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16041.54 62.66 7984.53 6197.75 14744.13 00:09:48.260 ======================================================== 00:09:48.260 Total : 16041.54 62.66 7984.53 6197.75 14744.13 00:09:48.260 00:09:48.260 [2024-05-15 10:50:04.161116] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:48.260 10:50:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:09:48.260 EAL: No free 2048 kB hugepages reported on node 1 00:09:48.260 [2024-05-15 10:50:04.388276] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:53.525 [2024-05-15 10:50:09.473389] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:53.525 Initializing NVMe Controllers 00:09:53.525 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:53.525 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:09:53.525 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:09:53.525 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:09:53.525 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:09:53.525 Initialization complete. Launching workers. 00:09:53.525 Starting thread on core 2 00:09:53.525 Starting thread on core 3 00:09:53.525 Starting thread on core 1 00:09:53.525 10:50:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:09:53.525 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.783 [2024-05-15 10:50:09.784424] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:57.070 [2024-05-15 10:50:12.848230] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:57.070 Initializing NVMe Controllers 00:09:57.070 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:57.070 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:57.070 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:09:57.070 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:09:57.070 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:09:57.070 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:09:57.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:09:57.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:09:57.070 Initialization complete. Launching workers. 
00:09:57.070 Starting thread on core 1 with urgent priority queue 00:09:57.070 Starting thread on core 2 with urgent priority queue 00:09:57.070 Starting thread on core 3 with urgent priority queue 00:09:57.070 Starting thread on core 0 with urgent priority queue 00:09:57.070 SPDK bdev Controller (SPDK1 ) core 0: 5354.00 IO/s 18.68 secs/100000 ios 00:09:57.070 SPDK bdev Controller (SPDK1 ) core 1: 5043.33 IO/s 19.83 secs/100000 ios 00:09:57.070 SPDK bdev Controller (SPDK1 ) core 2: 5537.67 IO/s 18.06 secs/100000 ios 00:09:57.070 SPDK bdev Controller (SPDK1 ) core 3: 5490.33 IO/s 18.21 secs/100000 ios 00:09:57.070 ======================================================== 00:09:57.070 00:09:57.070 10:50:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:57.070 EAL: No free 2048 kB hugepages reported on node 1 00:09:57.070 [2024-05-15 10:50:13.161398] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:57.070 Initializing NVMe Controllers 00:09:57.070 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:57.070 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:57.070 Namespace ID: 1 size: 0GB 00:09:57.070 Initialization complete. 00:09:57.070 INFO: using host memory buffer for IO 00:09:57.070 Hello world! 00:09:57.070 [2024-05-15 10:50:13.199027] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:57.070 10:50:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:09:57.070 EAL: No free 2048 kB hugepages reported on node 1 00:09:57.327 [2024-05-15 10:50:13.508413] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:58.702 Initializing NVMe Controllers 00:09:58.702 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:58.702 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:58.702 Initialization complete. Launching workers. 
00:09:58.702 submit (in ns) avg, min, max = 5453.2, 3515.6, 4022554.4 00:09:58.702 complete (in ns) avg, min, max = 30044.9, 2072.2, 7248577.8 00:09:58.702 00:09:58.702 Submit histogram 00:09:58.702 ================ 00:09:58.702 Range in us Cumulative Count 00:09:58.702 3.508 - 3.532: 0.1273% ( 17) 00:09:58.702 3.532 - 3.556: 0.6592% ( 71) 00:09:58.702 3.556 - 3.579: 2.3895% ( 231) 00:09:58.702 3.579 - 3.603: 5.4906% ( 414) 00:09:58.702 3.603 - 3.627: 13.2509% ( 1036) 00:09:58.702 3.627 - 3.650: 22.1723% ( 1191) 00:09:58.702 3.650 - 3.674: 32.9963% ( 1445) 00:09:58.702 3.674 - 3.698: 41.6704% ( 1158) 00:09:58.702 3.698 - 3.721: 49.3558% ( 1026) 00:09:58.702 3.721 - 3.745: 53.6704% ( 576) 00:09:58.702 3.745 - 3.769: 57.5730% ( 521) 00:09:58.702 3.769 - 3.793: 61.0637% ( 466) 00:09:58.702 3.793 - 3.816: 64.1498% ( 412) 00:09:58.702 3.816 - 3.840: 67.3109% ( 422) 00:09:58.702 3.840 - 3.864: 71.0861% ( 504) 00:09:58.702 3.864 - 3.887: 75.0712% ( 532) 00:09:58.702 3.887 - 3.911: 79.4757% ( 588) 00:09:58.702 3.911 - 3.935: 83.4232% ( 527) 00:09:58.702 3.935 - 3.959: 85.7753% ( 314) 00:09:58.702 3.959 - 3.982: 87.5131% ( 232) 00:09:58.702 3.982 - 4.006: 89.0712% ( 208) 00:09:58.702 4.006 - 4.030: 90.3221% ( 167) 00:09:58.702 4.030 - 4.053: 91.2509% ( 124) 00:09:58.702 4.053 - 4.077: 92.1348% ( 118) 00:09:58.702 4.077 - 4.101: 93.1386% ( 134) 00:09:58.702 4.101 - 4.124: 93.9101% ( 103) 00:09:58.703 4.124 - 4.148: 94.8240% ( 122) 00:09:58.703 4.148 - 4.172: 95.4232% ( 80) 00:09:58.703 4.172 - 4.196: 95.8727% ( 60) 00:09:58.703 4.196 - 4.219: 96.1723% ( 40) 00:09:58.703 4.219 - 4.243: 96.4569% ( 38) 00:09:58.703 4.243 - 4.267: 96.6367% ( 24) 00:09:58.703 4.267 - 4.290: 96.7640% ( 17) 00:09:58.703 4.290 - 4.314: 96.8989% ( 18) 00:09:58.703 4.314 - 4.338: 97.0337% ( 18) 00:09:58.703 4.338 - 4.361: 97.1536% ( 16) 00:09:58.703 4.361 - 4.385: 97.2434% ( 12) 00:09:58.703 4.385 - 4.409: 97.3333% ( 12) 00:09:58.703 4.409 - 4.433: 97.3933% ( 8) 00:09:58.703 4.433 - 4.456: 97.4232% ( 4) 00:09:58.703 4.456 - 4.480: 97.4607% ( 5) 00:09:58.703 4.480 - 4.504: 97.4757% ( 2) 00:09:58.703 4.504 - 4.527: 97.4906% ( 2) 00:09:58.703 4.527 - 4.551: 97.5131% ( 3) 00:09:58.703 4.551 - 4.575: 97.5356% ( 3) 00:09:58.703 4.575 - 4.599: 97.5506% ( 2) 00:09:58.703 4.599 - 4.622: 97.5655% ( 2) 00:09:58.703 4.622 - 4.646: 97.5805% ( 2) 00:09:58.703 4.717 - 4.741: 97.6105% ( 4) 00:09:58.703 4.741 - 4.764: 97.6180% ( 1) 00:09:58.703 4.764 - 4.788: 97.6255% ( 1) 00:09:58.703 4.788 - 4.812: 97.6479% ( 3) 00:09:58.703 4.812 - 4.836: 97.6929% ( 6) 00:09:58.703 4.836 - 4.859: 97.7228% ( 4) 00:09:58.703 4.859 - 4.883: 97.7528% ( 4) 00:09:58.703 4.883 - 4.907: 97.8052% ( 7) 00:09:58.703 4.907 - 4.930: 97.8652% ( 8) 00:09:58.703 4.930 - 4.954: 97.8951% ( 4) 00:09:58.703 4.954 - 4.978: 97.9101% ( 2) 00:09:58.703 4.978 - 5.001: 97.9401% ( 4) 00:09:58.703 5.001 - 5.025: 97.9551% ( 2) 00:09:58.703 5.025 - 5.049: 98.0000% ( 6) 00:09:58.703 5.049 - 5.073: 98.0449% ( 6) 00:09:58.703 5.073 - 5.096: 98.0749% ( 4) 00:09:58.703 5.096 - 5.120: 98.1199% ( 6) 00:09:58.703 5.120 - 5.144: 98.1573% ( 5) 00:09:58.703 5.144 - 5.167: 98.1798% ( 3) 00:09:58.703 5.167 - 5.191: 98.2097% ( 4) 00:09:58.703 5.191 - 5.215: 98.2547% ( 6) 00:09:58.703 5.215 - 5.239: 98.2772% ( 3) 00:09:58.703 5.262 - 5.286: 98.2846% ( 1) 00:09:58.703 5.286 - 5.310: 98.2921% ( 1) 00:09:58.703 5.310 - 5.333: 98.3071% ( 2) 00:09:58.703 5.357 - 5.381: 98.3146% ( 1) 00:09:58.703 5.381 - 5.404: 98.3371% ( 3) 00:09:58.703 5.428 - 5.452: 98.3521% ( 2) 00:09:58.703 5.499 - 5.523: 98.3596% ( 
1) 00:09:58.703 5.523 - 5.547: 98.3670% ( 1) 00:09:58.703 5.547 - 5.570: 98.3745% ( 1) 00:09:58.703 5.570 - 5.594: 98.3970% ( 3) 00:09:58.703 5.760 - 5.784: 98.4045% ( 1) 00:09:58.703 5.784 - 5.807: 98.4120% ( 1) 00:09:58.703 5.807 - 5.831: 98.4195% ( 1) 00:09:58.703 5.879 - 5.902: 98.4270% ( 1) 00:09:58.703 5.902 - 5.926: 98.4345% ( 1) 00:09:58.703 5.997 - 6.021: 98.4494% ( 2) 00:09:58.703 6.068 - 6.116: 98.4719% ( 3) 00:09:58.703 6.116 - 6.163: 98.4794% ( 1) 00:09:58.703 6.163 - 6.210: 98.4869% ( 1) 00:09:58.703 6.258 - 6.305: 98.4944% ( 1) 00:09:58.703 6.305 - 6.353: 98.5094% ( 2) 00:09:58.703 6.590 - 6.637: 98.5169% ( 1) 00:09:58.703 6.732 - 6.779: 98.5243% ( 1) 00:09:58.703 6.779 - 6.827: 98.5393% ( 2) 00:09:58.703 6.827 - 6.874: 98.5468% ( 1) 00:09:58.703 6.969 - 7.016: 98.5618% ( 2) 00:09:58.703 7.016 - 7.064: 98.5693% ( 1) 00:09:58.703 7.111 - 7.159: 98.5768% ( 1) 00:09:58.703 7.159 - 7.206: 98.5843% ( 1) 00:09:58.703 7.206 - 7.253: 98.5918% ( 1) 00:09:58.703 7.253 - 7.301: 98.6142% ( 3) 00:09:58.703 7.348 - 7.396: 98.6217% ( 1) 00:09:58.703 7.443 - 7.490: 98.6292% ( 1) 00:09:58.703 7.490 - 7.538: 98.6442% ( 2) 00:09:58.703 7.585 - 7.633: 98.6592% ( 2) 00:09:58.703 7.633 - 7.680: 98.6667% ( 1) 00:09:58.703 7.727 - 7.775: 98.6742% ( 1) 00:09:58.703 7.822 - 7.870: 98.6816% ( 1) 00:09:58.703 7.870 - 7.917: 98.7041% ( 3) 00:09:58.703 7.917 - 7.964: 98.7116% ( 1) 00:09:58.703 7.964 - 8.012: 98.7191% ( 1) 00:09:58.703 8.012 - 8.059: 98.7416% ( 3) 00:09:58.703 8.059 - 8.107: 98.7491% ( 1) 00:09:58.703 8.107 - 8.154: 98.7640% ( 2) 00:09:58.703 8.201 - 8.249: 98.7790% ( 2) 00:09:58.703 8.296 - 8.344: 98.7940% ( 2) 00:09:58.703 8.391 - 8.439: 98.8015% ( 1) 00:09:58.703 8.770 - 8.818: 98.8165% ( 2) 00:09:58.703 8.865 - 8.913: 98.8240% ( 1) 00:09:58.703 9.007 - 9.055: 98.8315% ( 1) 00:09:58.703 9.197 - 9.244: 98.8464% ( 2) 00:09:58.703 9.244 - 9.292: 98.8539% ( 1) 00:09:58.703 9.387 - 9.434: 98.8689% ( 2) 00:09:58.703 9.719 - 9.766: 98.8764% ( 1) 00:09:58.703 9.813 - 9.861: 98.8839% ( 1) 00:09:58.703 9.861 - 9.908: 98.8914% ( 1) 00:09:58.703 10.477 - 10.524: 98.8989% ( 1) 00:09:58.703 10.714 - 10.761: 98.9064% ( 1) 00:09:58.703 10.904 - 10.951: 98.9139% ( 1) 00:09:58.703 10.999 - 11.046: 98.9213% ( 1) 00:09:58.703 11.141 - 11.188: 98.9288% ( 1) 00:09:58.703 11.567 - 11.615: 98.9363% ( 1) 00:09:58.703 11.994 - 12.041: 98.9438% ( 1) 00:09:58.703 12.089 - 12.136: 98.9513% ( 1) 00:09:58.703 12.231 - 12.326: 98.9588% ( 1) 00:09:58.703 12.421 - 12.516: 98.9663% ( 1) 00:09:58.703 12.516 - 12.610: 98.9738% ( 1) 00:09:58.703 12.705 - 12.800: 98.9888% ( 2) 00:09:58.703 12.895 - 12.990: 98.9963% ( 1) 00:09:58.703 12.990 - 13.084: 99.0112% ( 2) 00:09:58.703 13.179 - 13.274: 99.0187% ( 1) 00:09:58.703 13.274 - 13.369: 99.0262% ( 1) 00:09:58.703 13.464 - 13.559: 99.0337% ( 1) 00:09:58.703 13.748 - 13.843: 99.0412% ( 1) 00:09:58.703 13.843 - 13.938: 99.0487% ( 1) 00:09:58.703 14.033 - 14.127: 99.0562% ( 1) 00:09:58.703 14.696 - 14.791: 99.0637% ( 1) 00:09:58.703 14.791 - 14.886: 99.0712% ( 1) 00:09:58.703 14.886 - 14.981: 99.0787% ( 1) 00:09:58.703 17.256 - 17.351: 99.0861% ( 1) 00:09:58.703 17.351 - 17.446: 99.1011% ( 2) 00:09:58.703 17.446 - 17.541: 99.1086% ( 1) 00:09:58.703 17.541 - 17.636: 99.1236% ( 2) 00:09:58.703 17.636 - 17.730: 99.1685% ( 6) 00:09:58.703 17.730 - 17.825: 99.2135% ( 6) 00:09:58.703 17.825 - 17.920: 99.2884% ( 10) 00:09:58.703 17.920 - 18.015: 99.3408% ( 7) 00:09:58.703 18.015 - 18.110: 99.4082% ( 9) 00:09:58.703 18.110 - 18.204: 99.4682% ( 8) 00:09:58.703 18.204 - 18.299: 99.5356% ( 
9) 00:09:58.703 18.299 - 18.394: 99.5955% ( 8) 00:09:58.703 18.394 - 18.489: 99.6704% ( 10) 00:09:58.703 18.489 - 18.584: 99.7079% ( 5) 00:09:58.703 18.584 - 18.679: 99.7154% ( 1) 00:09:58.703 18.679 - 18.773: 99.7303% ( 2) 00:09:58.703 18.773 - 18.868: 99.7603% ( 4) 00:09:58.703 18.868 - 18.963: 99.7753% ( 2) 00:09:58.703 18.963 - 19.058: 99.7978% ( 3) 00:09:58.703 19.058 - 19.153: 99.8052% ( 1) 00:09:58.703 19.153 - 19.247: 99.8202% ( 2) 00:09:58.703 19.247 - 19.342: 99.8277% ( 1) 00:09:58.703 19.342 - 19.437: 99.8352% ( 1) 00:09:58.703 19.437 - 19.532: 99.8502% ( 2) 00:09:58.703 19.627 - 19.721: 99.8577% ( 1) 00:09:58.703 19.721 - 19.816: 99.8727% ( 2) 00:09:58.703 20.006 - 20.101: 99.8801% ( 1) 00:09:58.703 20.196 - 20.290: 99.8876% ( 1) 00:09:58.703 20.290 - 20.385: 99.8951% ( 1) 00:09:58.703 20.670 - 20.764: 99.9026% ( 1) 00:09:58.703 21.049 - 21.144: 99.9101% ( 1) 00:09:58.703 21.144 - 21.239: 99.9176% ( 1) 00:09:58.703 22.376 - 22.471: 99.9251% ( 1) 00:09:58.703 23.419 - 23.514: 99.9326% ( 1) 00:09:58.703 23.704 - 23.799: 99.9551% ( 3) 00:09:58.703 32.806 - 32.996: 99.9625% ( 1) 00:09:58.703 3980.705 - 4004.978: 99.9775% ( 2) 00:09:58.703 4004.978 - 4029.250: 100.0000% ( 3) 00:09:58.703 00:09:58.703 Complete histogram 00:09:58.703 ================== 00:09:58.703 Range in us Cumulative Count 00:09:58.703 2.062 - 2.074: 0.1124% ( 15) 00:09:58.703 2.074 - 2.086: 17.8052% ( 2362) 00:09:58.703 2.086 - 2.098: 36.2697% ( 2465) 00:09:58.703 2.098 - 2.110: 37.9551% ( 225) 00:09:58.703 2.110 - 2.121: 53.8577% ( 2123) 00:09:58.703 2.121 - 2.133: 59.7753% ( 790) 00:09:58.703 2.133 - 2.145: 62.0449% ( 303) 00:09:58.703 2.145 - 2.157: 70.9513% ( 1189) 00:09:58.703 2.157 - 2.169: 74.1199% ( 423) 00:09:58.703 2.169 - 2.181: 75.4831% ( 182) 00:09:58.703 2.181 - 2.193: 79.8727% ( 586) 00:09:58.703 2.193 - 2.204: 81.1985% ( 177) 00:09:58.703 2.204 - 2.216: 82.0749% ( 117) 00:09:58.703 2.216 - 2.228: 87.4232% ( 714) 00:09:58.703 2.228 - 2.240: 89.5581% ( 285) 00:09:58.703 2.240 - 2.252: 89.9625% ( 54) 00:09:58.703 2.252 - 2.264: 92.1873% ( 297) 00:09:58.703 2.264 - 2.276: 92.9438% ( 101) 00:09:58.703 2.276 - 2.287: 93.3708% ( 57) 00:09:58.703 2.287 - 2.299: 94.5094% ( 152) 00:09:58.703 2.299 - 2.311: 94.8989% ( 52) 00:09:58.703 2.311 - 2.323: 95.0337% ( 18) 00:09:58.703 2.323 - 2.335: 95.1386% ( 14) 00:09:58.703 2.335 - 2.347: 95.2360% ( 13) 00:09:58.703 2.347 - 2.359: 95.3633% ( 17) 00:09:58.703 2.359 - 2.370: 95.6929% ( 44) 00:09:58.703 2.370 - 2.382: 96.2172% ( 70) 00:09:58.703 2.382 - 2.394: 96.4869% ( 36) 00:09:58.703 2.394 - 2.406: 96.7191% ( 31) 00:09:58.704 2.406 - 2.418: 96.9288% ( 28) 00:09:58.704 2.418 - 2.430: 97.1536% ( 30) 00:09:58.704 2.430 - 2.441: 97.3783% ( 30) 00:09:58.704 2.441 - 2.453: 97.6330% ( 34) 00:09:58.704 2.453 - 2.465: 97.7678% ( 18) 00:09:58.704 2.465 - 2.477: 97.9176% ( 20) 00:09:58.704 2.477 - 2.489: 98.0674% ( 20) 00:09:58.704 2.489 - 2.501: 98.1573% ( 12) 00:09:58.704 2.501 - 2.513: 98.2097% ( 7) 00:09:58.704 2.513 - 2.524: 98.2472% ( 5) 00:09:58.704 2.524 - 2.536: 98.2622% ( 2) 00:09:58.704 2.536 - 2.548: 98.2846% ( 3) 00:09:58.704 2.548 - 2.560: 98.2996% ( 2) 00:09:58.704 2.560 - 2.572: 98.3221% ( 3) 00:09:58.704 2.572 - 2.584: 98.3446% ( 3) 00:09:58.704 2.596 - 2.607: 98.3521% ( 1) 00:09:58.704 2.607 - 2.619: 98.3596% ( 1) 00:09:58.704 2.631 - 2.643: 98.3745% ( 2) 00:09:58.704 2.643 - 2.655: 98.3820% ( 1) 00:09:58.704 2.690 - 2.702: 98.3895% ( 1) 00:09:58.704 2.702 - 2.714: 98.3970% ( 1) 00:09:58.704 2.738 - 2.750: 98.4045% ( 1) 00:09:58.704 2.785 - 2.797: 
98.4120% ( 1) 00:09:58.704 2.809 - 2.821: 98.4195% ( 1) 00:09:58.704 2.821 - 2.833: 98.4270% ( 1) 00:09:58.704 2.833 - 2.844: 98.4345% ( 1) 00:09:58.704 2.963 - 2.975: 98.4494% ( 2) 00:09:58.704 3.153 - 3.176: 98.4569% ( 1) 00:09:58.704 3.224 - 3.247: 98.4644% ( 1) 00:09:58.704 3.295 - 3.319: 98.4719% ( 1) 00:09:58.704 3.319 - 3.342: 98.4869% ( 2) 00:09:58.704 3.342 - 3.366: 98.4944% ( 1) 00:09:58.704 3.413 - 3.437: 98.5019% ( 1) 00:09:58.704 3.484 - 3.508: 98.5243% ( 3) 00:09:58.704 3.508 - 3.532: 98.5318% ( 1) 00:09:58.704 [2024-05-15 10:50:14.533870] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:58.704 3.556 - 3.579: 98.5468% ( 2) 00:09:58.704 3.603 - 3.627: 98.5543% ( 1) 00:09:58.704 3.627 - 3.650: 98.5618% ( 1) 00:09:58.704 3.650 - 3.674: 98.5693% ( 1) 00:09:58.704 3.721 - 3.745: 98.5768% ( 1) 00:09:58.704 3.745 - 3.769: 98.5843% ( 1) 00:09:58.704 3.887 - 3.911: 98.5918% ( 1) 00:09:58.704 4.954 - 4.978: 98.5993% ( 1) 00:09:58.704 5.120 - 5.144: 98.6067% ( 1) 00:09:58.704 5.333 - 5.357: 98.6142% ( 1) 00:09:58.704 5.855 - 5.879: 98.6217% ( 1) 00:09:58.704 6.116 - 6.163: 98.6292% ( 1) 00:09:58.704 6.210 - 6.258: 98.6367% ( 1) 00:09:58.704 6.447 - 6.495: 98.6442% ( 1) 00:09:58.704 6.637 - 6.684: 98.6592% ( 2) 00:09:58.704 6.732 - 6.779: 98.6667% ( 1) 00:09:58.704 6.874 - 6.921: 98.6742% ( 1) 00:09:58.704 7.064 - 7.111: 98.6816% ( 1) 00:09:58.704 7.206 - 7.253: 98.6891% ( 1) 00:09:58.704 7.490 - 7.538: 98.6966% ( 1) 00:09:58.704 7.538 - 7.585: 98.7041% ( 1) 00:09:58.704 7.870 - 7.917: 98.7116% ( 1) 00:09:58.704 8.628 - 8.676: 98.7191% ( 1) 00:09:58.704 11.141 - 11.188: 98.7266% ( 1) 00:09:58.704 11.236 - 11.283: 98.7341% ( 1) 00:09:58.704 11.283 - 11.330: 98.7416% ( 1) 00:09:58.704 12.610 - 12.705: 98.7491% ( 1) 00:09:58.704 15.644 - 15.739: 98.7640% ( 2) 00:09:58.704 15.739 - 15.834: 98.7865% ( 3) 00:09:58.704 15.834 - 15.929: 98.8015% ( 2) 00:09:58.704 16.024 - 16.119: 98.8240% ( 3) 00:09:58.704 16.119 - 16.213: 98.8464% ( 3) 00:09:58.704 16.213 - 16.308: 98.8614% ( 2) 00:09:58.704 16.308 - 16.403: 98.8839% ( 3) 00:09:58.704 16.403 - 16.498: 98.9363% ( 7) 00:09:58.704 16.498 - 16.593: 98.9663% ( 4) 00:09:58.704 16.593 - 16.687: 98.9888% ( 3) 00:09:58.704 16.687 - 16.782: 99.0187% ( 4) 00:09:58.704 16.782 - 16.877: 99.1161% ( 13) 00:09:58.704 16.877 - 16.972: 99.1461% ( 4) 00:09:58.704 16.972 - 17.067: 99.1685% ( 3) 00:09:58.704 17.161 - 17.256: 99.1835% ( 2) 00:09:58.704 17.256 - 17.351: 99.1910% ( 1) 00:09:58.704 17.351 - 17.446: 99.2060% ( 2) 00:09:58.704 17.541 - 17.636: 99.2285% ( 3) 00:09:58.704 17.636 - 17.730: 99.2434% ( 2) 00:09:58.704 17.730 - 17.825: 99.2584% ( 2) 00:09:58.704 17.825 - 17.920: 99.2659% ( 1) 00:09:58.704 17.920 - 18.015: 99.2734% ( 1) 00:09:58.704 18.110 - 18.204: 99.2809% ( 1) 00:09:58.704 18.204 - 18.299: 99.2884% ( 1) 00:09:58.704 18.299 - 18.394: 99.2959% ( 1) 00:09:58.704 18.584 - 18.679: 99.3034% ( 1) 00:09:58.704 800.996 - 807.064: 99.3109% ( 1) 00:09:58.704 3009.801 - 3021.938: 99.3184% ( 1) 00:09:58.704 3980.705 - 4004.978: 99.7828% ( 62) 00:09:58.704 4004.978 - 4029.250: 99.9850% ( 27) 00:09:58.704 4029.250 - 4053.523: 99.9925% ( 1) 00:09:58.704 7233.233 - 7281.778: 100.0000% ( 1) 00:09:58.704 00:09:58.704 10:50:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:09:58.704 10:50:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 
00:09:58.704 10:50:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:09:58.704 10:50:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:09:58.704 10:50:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:09:58.704 [ 00:09:58.704 { 00:09:58.704 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:58.704 "subtype": "Discovery", 00:09:58.704 "listen_addresses": [], 00:09:58.704 "allow_any_host": true, 00:09:58.704 "hosts": [] 00:09:58.704 }, 00:09:58.704 { 00:09:58.704 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:09:58.704 "subtype": "NVMe", 00:09:58.704 "listen_addresses": [ 00:09:58.704 { 00:09:58.704 "trtype": "VFIOUSER", 00:09:58.704 "adrfam": "IPv4", 00:09:58.704 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:09:58.704 "trsvcid": "0" 00:09:58.704 } 00:09:58.704 ], 00:09:58.704 "allow_any_host": true, 00:09:58.704 "hosts": [], 00:09:58.704 "serial_number": "SPDK1", 00:09:58.704 "model_number": "SPDK bdev Controller", 00:09:58.704 "max_namespaces": 32, 00:09:58.704 "min_cntlid": 1, 00:09:58.704 "max_cntlid": 65519, 00:09:58.704 "namespaces": [ 00:09:58.704 { 00:09:58.704 "nsid": 1, 00:09:58.704 "bdev_name": "Malloc1", 00:09:58.704 "name": "Malloc1", 00:09:58.704 "nguid": "AA3777C58ECB4C468FE3A32780233EC6", 00:09:58.704 "uuid": "aa3777c5-8ecb-4c46-8fe3-a32780233ec6" 00:09:58.704 } 00:09:58.704 ] 00:09:58.704 }, 00:09:58.704 { 00:09:58.704 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:09:58.704 "subtype": "NVMe", 00:09:58.704 "listen_addresses": [ 00:09:58.704 { 00:09:58.704 "trtype": "VFIOUSER", 00:09:58.704 "adrfam": "IPv4", 00:09:58.704 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:09:58.704 "trsvcid": "0" 00:09:58.704 } 00:09:58.704 ], 00:09:58.704 "allow_any_host": true, 00:09:58.704 "hosts": [], 00:09:58.704 "serial_number": "SPDK2", 00:09:58.704 "model_number": "SPDK bdev Controller", 00:09:58.704 "max_namespaces": 32, 00:09:58.704 "min_cntlid": 1, 00:09:58.704 "max_cntlid": 65519, 00:09:58.704 "namespaces": [ 00:09:58.704 { 00:09:58.704 "nsid": 1, 00:09:58.704 "bdev_name": "Malloc2", 00:09:58.704 "name": "Malloc2", 00:09:58.704 "nguid": "27D5C999BB18475385A1885135EC726C", 00:09:58.704 "uuid": "27d5c999-bb18-4753-85a1-885135ec726c" 00:09:58.704 } 00:09:58.704 ] 00:09:58.704 } 00:09:58.704 ] 00:09:58.704 10:50:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:09:58.704 10:50:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2744531 00:09:58.704 10:50:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:09:58.704 10:50:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:09:58.704 10:50:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:09:58.704 10:50:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:09:58.704 10:50:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:09:58.704 10:50:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:09:58.704 10:50:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:09:58.704 10:50:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:09:58.704 EAL: No free 2048 kB hugepages reported on node 1 00:09:58.963 [2024-05-15 10:50:15.047421] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:09:58.963 Malloc3 00:09:58.963 10:50:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:09:59.221 [2024-05-15 10:50:15.383953] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:09:59.221 10:50:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:09:59.221 Asynchronous Event Request test 00:09:59.221 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:09:59.221 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:09:59.221 Registering asynchronous event callbacks... 00:09:59.221 Starting namespace attribute notice tests for all controllers... 00:09:59.221 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:09:59.221 aer_cb - Changed Namespace 00:09:59.221 Cleaning up... 00:09:59.479 [ 00:09:59.479 { 00:09:59.479 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:59.479 "subtype": "Discovery", 00:09:59.479 "listen_addresses": [], 00:09:59.479 "allow_any_host": true, 00:09:59.479 "hosts": [] 00:09:59.479 }, 00:09:59.479 { 00:09:59.479 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:09:59.479 "subtype": "NVMe", 00:09:59.479 "listen_addresses": [ 00:09:59.479 { 00:09:59.479 "trtype": "VFIOUSER", 00:09:59.479 "adrfam": "IPv4", 00:09:59.479 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:09:59.479 "trsvcid": "0" 00:09:59.479 } 00:09:59.479 ], 00:09:59.479 "allow_any_host": true, 00:09:59.479 "hosts": [], 00:09:59.479 "serial_number": "SPDK1", 00:09:59.479 "model_number": "SPDK bdev Controller", 00:09:59.479 "max_namespaces": 32, 00:09:59.479 "min_cntlid": 1, 00:09:59.479 "max_cntlid": 65519, 00:09:59.479 "namespaces": [ 00:09:59.479 { 00:09:59.479 "nsid": 1, 00:09:59.479 "bdev_name": "Malloc1", 00:09:59.479 "name": "Malloc1", 00:09:59.479 "nguid": "AA3777C58ECB4C468FE3A32780233EC6", 00:09:59.479 "uuid": "aa3777c5-8ecb-4c46-8fe3-a32780233ec6" 00:09:59.479 }, 00:09:59.479 { 00:09:59.479 "nsid": 2, 00:09:59.479 "bdev_name": "Malloc3", 00:09:59.479 "name": "Malloc3", 00:09:59.479 "nguid": "7AB187B2387C4E8E81579D945BADF1C4", 00:09:59.479 "uuid": "7ab187b2-387c-4e8e-8157-9d945badf1c4" 00:09:59.479 } 00:09:59.479 ] 00:09:59.479 }, 00:09:59.479 { 00:09:59.479 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:09:59.479 "subtype": "NVMe", 00:09:59.479 "listen_addresses": [ 00:09:59.479 { 00:09:59.479 "trtype": "VFIOUSER", 00:09:59.479 "adrfam": "IPv4", 00:09:59.479 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:09:59.479 "trsvcid": "0" 00:09:59.479 } 00:09:59.479 ], 00:09:59.479 "allow_any_host": true, 00:09:59.479 "hosts": [], 00:09:59.479 "serial_number": "SPDK2", 00:09:59.479 "model_number": "SPDK bdev Controller", 00:09:59.479 
"max_namespaces": 32, 00:09:59.479 "min_cntlid": 1, 00:09:59.479 "max_cntlid": 65519, 00:09:59.479 "namespaces": [ 00:09:59.479 { 00:09:59.480 "nsid": 1, 00:09:59.480 "bdev_name": "Malloc2", 00:09:59.480 "name": "Malloc2", 00:09:59.480 "nguid": "27D5C999BB18475385A1885135EC726C", 00:09:59.480 "uuid": "27d5c999-bb18-4753-85a1-885135ec726c" 00:09:59.480 } 00:09:59.480 ] 00:09:59.480 } 00:09:59.480 ] 00:09:59.480 10:50:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2744531 00:09:59.480 10:50:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:09:59.480 10:50:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:09:59.480 10:50:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:09:59.480 10:50:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:09:59.480 [2024-05-15 10:50:15.656574] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:09:59.480 [2024-05-15 10:50:15.656624] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2744553 ] 00:09:59.480 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.480 [2024-05-15 10:50:15.690156] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:09:59.480 [2024-05-15 10:50:15.698273] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:59.480 [2024-05-15 10:50:15.698302] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f16ab180000 00:09:59.480 [2024-05-15 10:50:15.699279] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:59.480 [2024-05-15 10:50:15.700286] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:59.480 [2024-05-15 10:50:15.701311] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:59.480 [2024-05-15 10:50:15.702303] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:59.480 [2024-05-15 10:50:15.703310] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:59.480 [2024-05-15 10:50:15.704313] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:59.480 [2024-05-15 10:50:15.705332] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:09:59.480 [2024-05-15 10:50:15.706324] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:09:59.480 [2024-05-15 10:50:15.707337] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:09:59.480 [2024-05-15 10:50:15.707362] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f16ab175000 00:09:59.480 [2024-05-15 10:50:15.708499] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:59.740 [2024-05-15 10:50:15.724792] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:09:59.740 [2024-05-15 10:50:15.724826] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:09:59.740 [2024-05-15 10:50:15.729958] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:09:59.740 [2024-05-15 10:50:15.730034] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:09:59.740 [2024-05-15 10:50:15.730137] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:09:59.740 [2024-05-15 10:50:15.730161] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:09:59.740 [2024-05-15 10:50:15.730171] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:09:59.740 [2024-05-15 10:50:15.730971] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:09:59.740 [2024-05-15 10:50:15.730993] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:09:59.740 [2024-05-15 10:50:15.731007] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:09:59.740 [2024-05-15 10:50:15.731977] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:09:59.740 [2024-05-15 10:50:15.732005] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:09:59.740 [2024-05-15 10:50:15.732021] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:09:59.740 [2024-05-15 10:50:15.732988] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:09:59.740 [2024-05-15 10:50:15.733010] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:09:59.740 [2024-05-15 10:50:15.733996] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:09:59.740 [2024-05-15 10:50:15.734018] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:09:59.740 [2024-05-15 10:50:15.734028] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:09:59.740 [2024-05-15 10:50:15.734040] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:09:59.740 [2024-05-15 10:50:15.734151] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:09:59.740 [2024-05-15 10:50:15.734159] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:09:59.740 [2024-05-15 10:50:15.734168] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:09:59.740 [2024-05-15 10:50:15.734999] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:09:59.740 [2024-05-15 10:50:15.736005] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:09:59.740 [2024-05-15 10:50:15.737014] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:09:59.740 [2024-05-15 10:50:15.738010] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:09:59.740 [2024-05-15 10:50:15.738079] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:09:59.740 [2024-05-15 10:50:15.739032] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:09:59.740 [2024-05-15 10:50:15.739052] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:09:59.740 [2024-05-15 10:50:15.739061] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:09:59.740 [2024-05-15 10:50:15.739085] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:09:59.740 [2024-05-15 10:50:15.739099] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:09:59.740 [2024-05-15 10:50:15.739123] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:59.740 [2024-05-15 10:50:15.739132] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:59.740 [2024-05-15 10:50:15.739154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:59.740 [2024-05-15 10:50:15.747946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:09:59.740 [2024-05-15 10:50:15.747989] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:09:59.740 [2024-05-15 10:50:15.748000] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:09:59.740 [2024-05-15 10:50:15.748008] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:09:59.740 [2024-05-15 10:50:15.748016] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:09:59.740 [2024-05-15 10:50:15.748024] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:09:59.740 [2024-05-15 10:50:15.748033] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:09:59.740 [2024-05-15 10:50:15.748041] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:09:59.741 [2024-05-15 10:50:15.748060] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:09:59.741 [2024-05-15 10:50:15.748080] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:09:59.741 [2024-05-15 10:50:15.754954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:09:59.741 [2024-05-15 10:50:15.754979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:09:59.741 [2024-05-15 10:50:15.754992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:09:59.741 [2024-05-15 10:50:15.755005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:09:59.741 [2024-05-15 10:50:15.755017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:09:59.741 [2024-05-15 10:50:15.755025] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:09:59.741 [2024-05-15 10:50:15.755041] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:09:59.741 [2024-05-15 10:50:15.755056] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:09:59.741 [2024-05-15 10:50:15.763941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:09:59.741 [2024-05-15 10:50:15.763960] nvme_ctrlr.c:2891:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:09:59.741 [2024-05-15 10:50:15.763971] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:09:59.741 [2024-05-15 10:50:15.763983] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:09:59.741 [2024-05-15 10:50:15.763998] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:09:59.741 [2024-05-15 10:50:15.764014] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:59.741 [2024-05-15 10:50:15.771943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:09:59.741 [2024-05-15 10:50:15.772008] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:09:59.741 [2024-05-15 10:50:15.772028] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:09:59.741 [2024-05-15 10:50:15.772043] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:09:59.741 [2024-05-15 10:50:15.772052] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:09:59.741 [2024-05-15 10:50:15.772062] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:09:59.741 [2024-05-15 10:50:15.779957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:09:59.741 [2024-05-15 10:50:15.779986] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:09:59.741 [2024-05-15 10:50:15.780007] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:09:59.741 [2024-05-15 10:50:15.780021] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:09:59.741 [2024-05-15 10:50:15.780033] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:59.741 [2024-05-15 10:50:15.780042] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:59.741 [2024-05-15 10:50:15.780052] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:59.741 [2024-05-15 10:50:15.787947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:09:59.741 [2024-05-15 10:50:15.787986] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:09:59.741 [2024-05-15 10:50:15.788002] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:09:59.741 [2024-05-15 10:50:15.788031] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:09:59.741 [2024-05-15 10:50:15.788041] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:59.741 [2024-05-15 10:50:15.788051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:59.741 [2024-05-15 10:50:15.795954] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:09:59.741 [2024-05-15 10:50:15.795982] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:09:59.741 [2024-05-15 10:50:15.795996] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:09:59.741 [2024-05-15 10:50:15.796012] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:09:59.741 [2024-05-15 10:50:15.796023] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:09:59.741 [2024-05-15 10:50:15.796032] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:09:59.741 [2024-05-15 10:50:15.796040] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:09:59.741 [2024-05-15 10:50:15.796048] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:09:59.741 [2024-05-15 10:50:15.796060] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:09:59.741 [2024-05-15 10:50:15.796089] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:09:59.741 [2024-05-15 10:50:15.803959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:09:59.741 [2024-05-15 10:50:15.803996] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:09:59.741 [2024-05-15 10:50:15.811959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:09:59.741 [2024-05-15 10:50:15.811995] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:09:59.741 [2024-05-15 10:50:15.819958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:09:59.741 [2024-05-15 10:50:15.819992] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:59.741 [2024-05-15 10:50:15.827957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:09:59.741 [2024-05-15 10:50:15.827995] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:09:59.741 [2024-05-15 10:50:15.828005] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:09:59.741 [2024-05-15 10:50:15.828011] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:09:59.741 [2024-05-15 10:50:15.828017] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:09:59.741 [2024-05-15 10:50:15.828027] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:09:59.741 [2024-05-15 10:50:15.828039] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:09:59.742 [2024-05-15 10:50:15.828047] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:09:59.742 [2024-05-15 10:50:15.828056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:09:59.742 [2024-05-15 10:50:15.828067] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:09:59.742 [2024-05-15 10:50:15.828076] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:09:59.742 [2024-05-15 10:50:15.828084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:09:59.742 [2024-05-15 10:50:15.828101] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:09:59.742 [2024-05-15 10:50:15.828110] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:09:59.742 [2024-05-15 10:50:15.828119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:09:59.742 [2024-05-15 10:50:15.835944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:09:59.742 [2024-05-15 10:50:15.835999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:09:59.742 [2024-05-15 10:50:15.836015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:09:59.742 [2024-05-15 10:50:15.836031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:09:59.742 ===================================================== 00:09:59.742 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:09:59.742 ===================================================== 00:09:59.742 Controller Capabilities/Features 00:09:59.742 ================================ 00:09:59.742 Vendor ID: 4e58 00:09:59.742 Subsystem Vendor ID: 4e58 00:09:59.742 Serial Number: SPDK2 00:09:59.742 Model Number: SPDK bdev Controller 00:09:59.742 Firmware Version: 24.05 00:09:59.742 Recommended Arb Burst: 6 00:09:59.742 IEEE OUI Identifier: 8d 6b 50 00:09:59.742 Multi-path I/O 00:09:59.742 May have multiple subsystem ports: Yes 00:09:59.742 May have multiple controllers: Yes 00:09:59.742 Associated with SR-IOV VF: No 00:09:59.742 Max Data Transfer Size: 131072 00:09:59.742 Max Number of Namespaces: 32 00:09:59.742 Max Number of I/O Queues: 127 00:09:59.742 NVMe Specification Version (VS): 1.3 00:09:59.742 NVMe Specification Version (Identify): 1.3 00:09:59.742 Maximum Queue Entries: 256 00:09:59.742 Contiguous Queues Required: Yes 00:09:59.742 Arbitration Mechanisms Supported 00:09:59.742 Weighted Round Robin: Not Supported 00:09:59.742 Vendor Specific: Not Supported 00:09:59.742 Reset Timeout: 15000 ms 00:09:59.742 Doorbell Stride: 4 bytes 
00:09:59.742 NVM Subsystem Reset: Not Supported 00:09:59.742 Command Sets Supported 00:09:59.742 NVM Command Set: Supported 00:09:59.742 Boot Partition: Not Supported 00:09:59.742 Memory Page Size Minimum: 4096 bytes 00:09:59.742 Memory Page Size Maximum: 4096 bytes 00:09:59.742 Persistent Memory Region: Not Supported 00:09:59.742 Optional Asynchronous Events Supported 00:09:59.742 Namespace Attribute Notices: Supported 00:09:59.742 Firmware Activation Notices: Not Supported 00:09:59.742 ANA Change Notices: Not Supported 00:09:59.742 PLE Aggregate Log Change Notices: Not Supported 00:09:59.742 LBA Status Info Alert Notices: Not Supported 00:09:59.742 EGE Aggregate Log Change Notices: Not Supported 00:09:59.742 Normal NVM Subsystem Shutdown event: Not Supported 00:09:59.742 Zone Descriptor Change Notices: Not Supported 00:09:59.742 Discovery Log Change Notices: Not Supported 00:09:59.742 Controller Attributes 00:09:59.742 128-bit Host Identifier: Supported 00:09:59.742 Non-Operational Permissive Mode: Not Supported 00:09:59.742 NVM Sets: Not Supported 00:09:59.742 Read Recovery Levels: Not Supported 00:09:59.742 Endurance Groups: Not Supported 00:09:59.742 Predictable Latency Mode: Not Supported 00:09:59.742 Traffic Based Keep ALive: Not Supported 00:09:59.742 Namespace Granularity: Not Supported 00:09:59.742 SQ Associations: Not Supported 00:09:59.742 UUID List: Not Supported 00:09:59.742 Multi-Domain Subsystem: Not Supported 00:09:59.742 Fixed Capacity Management: Not Supported 00:09:59.742 Variable Capacity Management: Not Supported 00:09:59.742 Delete Endurance Group: Not Supported 00:09:59.742 Delete NVM Set: Not Supported 00:09:59.742 Extended LBA Formats Supported: Not Supported 00:09:59.742 Flexible Data Placement Supported: Not Supported 00:09:59.742 00:09:59.742 Controller Memory Buffer Support 00:09:59.742 ================================ 00:09:59.742 Supported: No 00:09:59.742 00:09:59.742 Persistent Memory Region Support 00:09:59.742 ================================ 00:09:59.742 Supported: No 00:09:59.742 00:09:59.742 Admin Command Set Attributes 00:09:59.742 ============================ 00:09:59.742 Security Send/Receive: Not Supported 00:09:59.742 Format NVM: Not Supported 00:09:59.742 Firmware Activate/Download: Not Supported 00:09:59.742 Namespace Management: Not Supported 00:09:59.742 Device Self-Test: Not Supported 00:09:59.742 Directives: Not Supported 00:09:59.742 NVMe-MI: Not Supported 00:09:59.742 Virtualization Management: Not Supported 00:09:59.742 Doorbell Buffer Config: Not Supported 00:09:59.742 Get LBA Status Capability: Not Supported 00:09:59.742 Command & Feature Lockdown Capability: Not Supported 00:09:59.742 Abort Command Limit: 4 00:09:59.742 Async Event Request Limit: 4 00:09:59.742 Number of Firmware Slots: N/A 00:09:59.742 Firmware Slot 1 Read-Only: N/A 00:09:59.742 Firmware Activation Without Reset: N/A 00:09:59.742 Multiple Update Detection Support: N/A 00:09:59.742 Firmware Update Granularity: No Information Provided 00:09:59.742 Per-Namespace SMART Log: No 00:09:59.742 Asymmetric Namespace Access Log Page: Not Supported 00:09:59.742 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:09:59.742 Command Effects Log Page: Supported 00:09:59.742 Get Log Page Extended Data: Supported 00:09:59.742 Telemetry Log Pages: Not Supported 00:09:59.742 Persistent Event Log Pages: Not Supported 00:09:59.742 Supported Log Pages Log Page: May Support 00:09:59.743 Commands Supported & Effects Log Page: Not Supported 00:09:59.743 Feature Identifiers & Effects Log Page:May 
Support 00:09:59.743 NVMe-MI Commands & Effects Log Page: May Support 00:09:59.743 Data Area 4 for Telemetry Log: Not Supported 00:09:59.743 Error Log Page Entries Supported: 128 00:09:59.743 Keep Alive: Supported 00:09:59.743 Keep Alive Granularity: 10000 ms 00:09:59.743 00:09:59.743 NVM Command Set Attributes 00:09:59.743 ========================== 00:09:59.743 Submission Queue Entry Size 00:09:59.743 Max: 64 00:09:59.743 Min: 64 00:09:59.743 Completion Queue Entry Size 00:09:59.743 Max: 16 00:09:59.743 Min: 16 00:09:59.743 Number of Namespaces: 32 00:09:59.743 Compare Command: Supported 00:09:59.743 Write Uncorrectable Command: Not Supported 00:09:59.743 Dataset Management Command: Supported 00:09:59.743 Write Zeroes Command: Supported 00:09:59.743 Set Features Save Field: Not Supported 00:09:59.743 Reservations: Not Supported 00:09:59.743 Timestamp: Not Supported 00:09:59.743 Copy: Supported 00:09:59.743 Volatile Write Cache: Present 00:09:59.743 Atomic Write Unit (Normal): 1 00:09:59.743 Atomic Write Unit (PFail): 1 00:09:59.743 Atomic Compare & Write Unit: 1 00:09:59.743 Fused Compare & Write: Supported 00:09:59.743 Scatter-Gather List 00:09:59.743 SGL Command Set: Supported (Dword aligned) 00:09:59.743 SGL Keyed: Not Supported 00:09:59.743 SGL Bit Bucket Descriptor: Not Supported 00:09:59.743 SGL Metadata Pointer: Not Supported 00:09:59.743 Oversized SGL: Not Supported 00:09:59.743 SGL Metadata Address: Not Supported 00:09:59.743 SGL Offset: Not Supported 00:09:59.743 Transport SGL Data Block: Not Supported 00:09:59.743 Replay Protected Memory Block: Not Supported 00:09:59.743 00:09:59.743 Firmware Slot Information 00:09:59.743 ========================= 00:09:59.743 Active slot: 1 00:09:59.743 Slot 1 Firmware Revision: 24.05 00:09:59.743 00:09:59.743 00:09:59.743 Commands Supported and Effects 00:09:59.743 ============================== 00:09:59.743 Admin Commands 00:09:59.743 -------------- 00:09:59.743 Get Log Page (02h): Supported 00:09:59.743 Identify (06h): Supported 00:09:59.743 Abort (08h): Supported 00:09:59.743 Set Features (09h): Supported 00:09:59.743 Get Features (0Ah): Supported 00:09:59.743 Asynchronous Event Request (0Ch): Supported 00:09:59.743 Keep Alive (18h): Supported 00:09:59.743 I/O Commands 00:09:59.743 ------------ 00:09:59.743 Flush (00h): Supported LBA-Change 00:09:59.743 Write (01h): Supported LBA-Change 00:09:59.743 Read (02h): Supported 00:09:59.743 Compare (05h): Supported 00:09:59.743 Write Zeroes (08h): Supported LBA-Change 00:09:59.743 Dataset Management (09h): Supported LBA-Change 00:09:59.743 Copy (19h): Supported LBA-Change 00:09:59.743 Unknown (79h): Supported LBA-Change 00:09:59.743 Unknown (7Ah): Supported 00:09:59.743 00:09:59.743 Error Log 00:09:59.743 ========= 00:09:59.743 00:09:59.743 Arbitration 00:09:59.743 =========== 00:09:59.743 Arbitration Burst: 1 00:09:59.743 00:09:59.743 Power Management 00:09:59.743 ================ 00:09:59.743 Number of Power States: 1 00:09:59.743 Current Power State: Power State #0 00:09:59.743 Power State #0: 00:09:59.743 Max Power: 0.00 W 00:09:59.743 Non-Operational State: Operational 00:09:59.743 Entry Latency: Not Reported 00:09:59.743 Exit Latency: Not Reported 00:09:59.743 Relative Read Throughput: 0 00:09:59.743 Relative Read Latency: 0 00:09:59.743 Relative Write Throughput: 0 00:09:59.743 Relative Write Latency: 0 00:09:59.743 Idle Power: Not Reported 00:09:59.743 Active Power: Not Reported 00:09:59.743 Non-Operational Permissive Mode: Not Supported 00:09:59.743 00:09:59.743 Health Information 
00:09:59.743 ================== 00:09:59.743 Critical Warnings: 00:09:59.743 Available Spare Space: OK 00:09:59.743 Temperature: OK 00:09:59.743 Device Reliability: OK 00:09:59.743 Read Only: No 00:09:59.743 Volatile Memory Backup: OK 00:09:59.743 Current Temperature: 0 Kelvin (-273 Celsius) 00:09:59.744 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:09:59.744 Available Spare: 0% 00:09:59.744 Available Spare Threshold: 0% 00:09:59.744 Life Percentage Used: 0% 00:09:59.744 Data Units Read: 0 00:09:59.744 Data Units Written: 0 00:09:59.744 Host Read Commands: 0 00:09:59.744 Host Write Commands: 0 00:09:59.744 Controller Busy Time: 0 minutes 00:09:59.744 Power Cycles: 0 00:09:59.744 Power On Hours: 0 hours 00:09:59.744 Unsafe Shutdowns: 0 00:09:59.744 Unrecoverable Media Errors: 0 00:09:59.744 Lifetime Error Log Entries: 0 00:09:59.744 Warning Temperature Time: 0 minutes 00:09:59.744 Critical Temperature Time: 0 minutes 00:09:59.744 00:09:59.744 Number of Queues 00:09:59.744 ================ 00:09:59.744 Number of I/O Submission Queues: 127 00:09:59.744 Number of I/O Completion Queues: 127 00:09:59.744 00:09:59.744 Active Namespaces 00:09:59.744 ================= 00:09:59.744 Namespace ID:1 00:09:59.744 Error Recovery Timeout: Unlimited 00:09:59.744 Command Set Identifier: NVM (00h) 00:09:59.744 Deallocate: Supported 00:09:59.744 Deallocated/Unwritten Error: Not Supported 00:09:59.744 Deallocated Read Value: Unknown 00:09:59.744 Deallocate in Write Zeroes: Not Supported 00:09:59.744 Deallocated Guard Field: 0xFFFF 00:09:59.744 Flush: Supported 00:09:59.744 Reservation: Supported 00:09:59.744 Namespace Sharing Capabilities: Multiple Controllers 00:09:59.744 Size (in LBAs): 131072 (0GiB) 00:09:59.744 Capacity (in LBAs): 131072 (0GiB) 00:09:59.744 Utilization (in LBAs): 131072 (0GiB) 00:09:59.744 NGUID: 27D5C999BB18475385A1885135EC726C 00:09:59.744 UUID: 27d5c999-bb18-4753-85a1-885135ec726c 00:09:59.744 Thin Provisioning: Not Supported 00:09:59.744 Per-NS Atomic Units: Yes 00:09:59.744 Atomic Boundary Size (Normal): 0 00:09:59.744 Atomic Boundary Size (PFail): 0 00:09:59.744 Atomic Boundary Offset: 0 00:09:59.744 Maximum Single Source Range Length: 65535 00:09:59.744 Maximum Copy Length: 65535 00:09:59.744 Maximum Source Range Count: 1 00:09:59.744 NGUID/EUI64 Never Reused: No 00:09:59.744 Namespace Write Protected: No 00:09:59.744 Number of LBA Formats: 1 00:09:59.744 Current LBA Format: LBA Format #00 00:09:59.744 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:59.744 00:09:59.744 
[2024-05-15 10:50:15.836150] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:09:59.743 [2024-05-15 10:50:15.843943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:09:59.743 [2024-05-15 10:50:15.844009] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:09:59.743 [2024-05-15 10:50:15.844027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.743 [2024-05-15 10:50:15.844038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.743 [2024-05-15 10:50:15.844049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.743 [2024-05-15 10:50:15.844058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:59.743 [2024-05-15 10:50:15.844140] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:09:59.743 [2024-05-15 10:50:15.844162] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:09:59.743 [2024-05-15 10:50:15.845136] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:09:59.743 [2024-05-15 10:50:15.845207] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:09:59.743 [2024-05-15 10:50:15.845222] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:09:59.743 [2024-05-15 10:50:15.846140] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:09:59.743 [2024-05-15 10:50:15.846165] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:09:59.744 [2024-05-15 10:50:15.846217] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:09:59.744 [2024-05-15 10:50:15.847422] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:09:59.744 
10:50:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:09:59.744 EAL: No free 2048 kB hugepages reported on node 1 00:10:00.002 [2024-05-15 10:50:16.067924] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:05.309 Initializing NVMe Controllers 00:10:05.309 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:05.309 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:05.309 Initialization complete. Launching workers.
00:10:05.309 ======================================================== 00:10:05.309 Latency(us) 00:10:05.309 Device Information : IOPS MiB/s Average min max 00:10:05.309 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33156.19 129.52 3860.13 1191.61 9005.43 00:10:05.309 ======================================================== 00:10:05.309 Total : 33156.19 129.52 3860.13 1191.61 9005.43 00:10:05.309 00:10:05.309 [2024-05-15 10:50:21.173299] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:05.309 10:50:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:10:05.309 EAL: No free 2048 kB hugepages reported on node 1 00:10:05.309 [2024-05-15 10:50:21.408941] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:10.565 Initializing NVMe Controllers 00:10:10.565 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:10.565 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:10:10.565 Initialization complete. Launching workers. 00:10:10.565 ======================================================== 00:10:10.565 Latency(us) 00:10:10.565 Device Information : IOPS MiB/s Average min max 00:10:10.565 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31668.99 123.71 4043.78 1202.62 9821.11 00:10:10.565 ======================================================== 00:10:10.565 Total : 31668.99 123.71 4043.78 1202.62 9821.11 00:10:10.565 00:10:10.565 [2024-05-15 10:50:26.431015] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:10.565 10:50:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:10:10.565 EAL: No free 2048 kB hugepages reported on node 1 00:10:10.565 [2024-05-15 10:50:26.667348] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:15.829 [2024-05-15 10:50:31.808075] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:15.829 Initializing NVMe Controllers 00:10:15.829 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:15.829 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:10:15.829 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:10:15.829 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:10:15.829 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:10:15.829 Initialization complete. Launching workers. 
00:10:15.829 Starting thread on core 2 00:10:15.829 Starting thread on core 3 00:10:15.829 Starting thread on core 1 00:10:15.829 10:50:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:10:15.829 EAL: No free 2048 kB hugepages reported on node 1 00:10:16.088 [2024-05-15 10:50:32.127487] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:19.374 [2024-05-15 10:50:35.209917] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:19.374 Initializing NVMe Controllers 00:10:19.374 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:19.374 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:19.374 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:10:19.374 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:10:19.374 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:10:19.374 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:10:19.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:10:19.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:10:19.374 Initialization complete. Launching workers. 00:10:19.374 Starting thread on core 1 with urgent priority queue 00:10:19.374 Starting thread on core 2 with urgent priority queue 00:10:19.374 Starting thread on core 3 with urgent priority queue 00:10:19.374 Starting thread on core 0 with urgent priority queue 00:10:19.374 SPDK bdev Controller (SPDK2 ) core 0: 4374.00 IO/s 22.86 secs/100000 ios 00:10:19.374 SPDK bdev Controller (SPDK2 ) core 1: 5766.33 IO/s 17.34 secs/100000 ios 00:10:19.374 SPDK bdev Controller (SPDK2 ) core 2: 5742.67 IO/s 17.41 secs/100000 ios 00:10:19.374 SPDK bdev Controller (SPDK2 ) core 3: 5689.00 IO/s 17.58 secs/100000 ios 00:10:19.374 ======================================================== 00:10:19.374 00:10:19.375 10:50:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:19.375 EAL: No free 2048 kB hugepages reported on node 1 00:10:19.375 [2024-05-15 10:50:35.524412] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:19.375 Initializing NVMe Controllers 00:10:19.375 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:19.375 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:19.375 Namespace ID: 1 size: 0GB 00:10:19.375 Initialization complete. 00:10:19.375 INFO: using host memory buffer for IO 00:10:19.375 Hello world! 
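Note: all of the host-side examples above (spdk_nvme_perf, reconnect, arbitration, hello_world) reach the target the same way: the -r transport ID string selects trtype VFIOUSER, points traddr at the controller's socket directory, and names the subsystem NQN. A minimal sketch of the pattern, reusing the exact flags from the read run above (the Jenkins workspace prefix is shortened to the spdk repo root):

    # Sketch only: drive the vfio-user controller created earlier in this log.
    # All flags below are copied from the spdk_nvme_perf read run above.
    build/bin/spdk_nvme_perf \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2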
00:10:19.375 [2024-05-15 10:50:35.535469] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:19.375 10:50:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:10:19.633 EAL: No free 2048 kB hugepages reported on node 1 00:10:19.633 [2024-05-15 10:50:35.837846] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:21.011 Initializing NVMe Controllers 00:10:21.011 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:21.011 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:21.011 Initialization complete. Launching workers. 00:10:21.011 submit (in ns) avg, min, max = 6940.5, 3497.8, 4015814.4 00:10:21.011 complete (in ns) avg, min, max = 28187.8, 2057.8, 4026970.0 00:10:21.011 00:10:21.011 Submit histogram 00:10:21.011 ================ 00:10:21.011 Range in us Cumulative Count 00:10:21.011 3.484 - 3.508: 0.0302% ( 4) 00:10:21.011 3.508 - 3.532: 0.2419% ( 28) 00:10:21.011 3.532 - 3.556: 1.1644% ( 122) 00:10:21.011 3.556 - 3.579: 3.4326% ( 300) 00:10:21.011 3.579 - 3.603: 8.5589% ( 678) 00:10:21.011 3.603 - 3.627: 14.5017% ( 786) 00:10:21.011 3.627 - 3.650: 25.8733% ( 1504) 00:10:21.011 3.650 - 3.674: 34.3414% ( 1120) 00:10:21.011 3.674 - 3.698: 43.5581% ( 1219) 00:10:21.011 3.698 - 3.721: 49.4178% ( 775) 00:10:21.011 3.721 - 3.745: 54.4458% ( 665) 00:10:21.011 3.745 - 3.769: 57.9616% ( 465) 00:10:21.011 3.769 - 3.793: 61.8025% ( 508) 00:10:21.011 3.793 - 3.816: 64.8873% ( 408) 00:10:21.011 3.816 - 3.840: 68.0629% ( 420) 00:10:21.011 3.840 - 3.864: 72.4936% ( 586) 00:10:21.011 3.864 - 3.887: 76.9091% ( 584) 00:10:21.011 3.887 - 3.911: 80.9693% ( 537) 00:10:21.011 3.911 - 3.935: 84.0919% ( 413) 00:10:21.011 3.935 - 3.959: 86.3678% ( 301) 00:10:21.011 3.959 - 3.982: 88.2731% ( 252) 00:10:21.011 3.982 - 4.006: 89.8155% ( 204) 00:10:21.011 4.006 - 4.030: 90.9496% ( 150) 00:10:21.011 4.030 - 4.053: 91.9250% ( 129) 00:10:21.011 4.053 - 4.077: 92.8096% ( 117) 00:10:21.011 4.077 - 4.101: 93.6942% ( 117) 00:10:21.011 4.101 - 4.124: 94.5033% ( 107) 00:10:21.011 4.124 - 4.148: 95.0174% ( 68) 00:10:21.011 4.148 - 4.172: 95.5769% ( 74) 00:10:21.011 4.172 - 4.196: 95.9776% ( 53) 00:10:21.011 4.196 - 4.219: 96.3405% ( 48) 00:10:21.011 4.219 - 4.243: 96.6430% ( 40) 00:10:21.011 4.243 - 4.267: 96.8244% ( 24) 00:10:21.011 4.267 - 4.290: 97.0361% ( 28) 00:10:21.011 4.290 - 4.314: 97.1647% ( 17) 00:10:21.011 4.314 - 4.338: 97.2781% ( 15) 00:10:21.011 4.338 - 4.361: 97.4066% ( 17) 00:10:21.011 4.361 - 4.385: 97.5125% ( 14) 00:10:21.011 4.385 - 4.409: 97.5730% ( 8) 00:10:21.011 4.409 - 4.433: 97.6259% ( 7) 00:10:21.011 4.433 - 4.456: 97.7015% ( 10) 00:10:21.011 4.456 - 4.480: 97.7393% ( 5) 00:10:21.011 4.480 - 4.504: 97.7998% ( 8) 00:10:21.011 4.504 - 4.527: 97.8300% ( 4) 00:10:21.011 4.527 - 4.551: 97.8527% ( 3) 00:10:21.011 4.551 - 4.575: 97.8603% ( 1) 00:10:21.011 4.575 - 4.599: 97.8830% ( 3) 00:10:21.011 4.622 - 4.646: 97.9056% ( 3) 00:10:21.011 4.646 - 4.670: 97.9132% ( 1) 00:10:21.011 4.670 - 4.693: 97.9208% ( 1) 00:10:21.011 4.717 - 4.741: 97.9283% ( 1) 00:10:21.011 4.741 - 4.764: 97.9359% ( 1) 00:10:21.011 4.788 - 4.812: 97.9510% ( 2) 00:10:21.011 4.812 - 4.836: 97.9737% ( 3) 00:10:21.011 4.836 - 4.859: 98.0191% ( 6) 00:10:21.011 4.859 - 4.883: 98.0569% ( 5) 00:10:21.011 
4.883 - 4.907: 98.0644% ( 1) 00:10:21.011 4.907 - 4.930: 98.1173% ( 7) 00:10:21.011 4.930 - 4.954: 98.1551% ( 5) 00:10:21.011 4.954 - 4.978: 98.1778% ( 3) 00:10:21.011 4.978 - 5.001: 98.2005% ( 3) 00:10:21.011 5.001 - 5.025: 98.2383% ( 5) 00:10:21.011 5.025 - 5.049: 98.2534% ( 2) 00:10:21.011 5.049 - 5.073: 98.2686% ( 2) 00:10:21.011 5.073 - 5.096: 98.3290% ( 8) 00:10:21.011 5.096 - 5.120: 98.3895% ( 8) 00:10:21.011 5.120 - 5.144: 98.4651% ( 10) 00:10:21.011 5.144 - 5.167: 98.4803% ( 2) 00:10:21.011 5.167 - 5.191: 98.4878% ( 1) 00:10:21.011 5.191 - 5.215: 98.5029% ( 2) 00:10:21.011 5.215 - 5.239: 98.5256% ( 3) 00:10:21.011 5.262 - 5.286: 98.5408% ( 2) 00:10:21.011 5.286 - 5.310: 98.5634% ( 3) 00:10:21.011 5.310 - 5.333: 98.5861% ( 3) 00:10:21.011 5.333 - 5.357: 98.6088% ( 3) 00:10:21.011 5.357 - 5.381: 98.6315% ( 3) 00:10:21.011 5.381 - 5.404: 98.6390% ( 1) 00:10:21.011 5.404 - 5.428: 98.6542% ( 2) 00:10:21.011 5.428 - 5.452: 98.6617% ( 1) 00:10:21.011 5.547 - 5.570: 98.6693% ( 1) 00:10:21.011 5.641 - 5.665: 98.6844% ( 2) 00:10:21.011 5.736 - 5.760: 98.6920% ( 1) 00:10:21.011 5.831 - 5.855: 98.6995% ( 1) 00:10:21.011 5.879 - 5.902: 98.7071% ( 1) 00:10:21.011 5.902 - 5.926: 98.7147% ( 1) 00:10:21.011 5.973 - 5.997: 98.7222% ( 1) 00:10:21.011 5.997 - 6.021: 98.7298% ( 1) 00:10:21.011 6.068 - 6.116: 98.7373% ( 1) 00:10:21.011 6.116 - 6.163: 98.7525% ( 2) 00:10:21.011 6.258 - 6.305: 98.7600% ( 1) 00:10:21.011 6.305 - 6.353: 98.7676% ( 1) 00:10:21.011 6.353 - 6.400: 98.7751% ( 1) 00:10:21.011 6.400 - 6.447: 98.7827% ( 1) 00:10:21.011 6.542 - 6.590: 98.7903% ( 1) 00:10:21.011 6.590 - 6.637: 98.8054% ( 2) 00:10:21.011 6.684 - 6.732: 98.8129% ( 1) 00:10:21.011 6.732 - 6.779: 98.8281% ( 2) 00:10:21.011 6.779 - 6.827: 98.8356% ( 1) 00:10:21.011 7.064 - 7.111: 98.8583% ( 3) 00:10:21.011 7.253 - 7.301: 98.8659% ( 1) 00:10:21.011 7.301 - 7.348: 98.8810% ( 2) 00:10:21.011 7.348 - 7.396: 98.8886% ( 1) 00:10:21.011 7.490 - 7.538: 98.8961% ( 1) 00:10:21.011 7.538 - 7.585: 98.9037% ( 1) 00:10:21.011 7.680 - 7.727: 98.9339% ( 4) 00:10:21.011 7.822 - 7.870: 98.9415% ( 1) 00:10:21.011 7.964 - 8.012: 98.9490% ( 1) 00:10:21.011 8.012 - 8.059: 98.9566% ( 1) 00:10:21.011 8.059 - 8.107: 98.9642% ( 1) 00:10:21.011 8.249 - 8.296: 98.9868% ( 3) 00:10:21.011 8.486 - 8.533: 98.9944% ( 1) 00:10:21.011 8.581 - 8.628: 99.0171% ( 3) 00:10:21.011 8.676 - 8.723: 99.0246% ( 1) 00:10:21.011 8.770 - 8.818: 99.0322% ( 1) 00:10:21.011 8.865 - 8.913: 99.0398% ( 1) 00:10:21.012 9.244 - 9.292: 99.0473% ( 1) 00:10:21.012 9.481 - 9.529: 99.0549% ( 1) 00:10:21.012 9.529 - 9.576: 99.0700% ( 2) 00:10:21.012 9.861 - 9.908: 99.0776% ( 1) 00:10:21.012 10.145 - 10.193: 99.0851% ( 1) 00:10:21.012 10.335 - 10.382: 99.0927% ( 1) 00:10:21.012 10.619 - 10.667: 99.1003% ( 1) 00:10:21.012 10.714 - 10.761: 99.1078% ( 1) 00:10:21.012 10.856 - 10.904: 99.1154% ( 1) 00:10:21.012 11.046 - 11.093: 99.1229% ( 1) 00:10:21.012 11.141 - 11.188: 99.1305% ( 1) 00:10:21.012 11.283 - 11.330: 99.1381% ( 1) 00:10:21.012 11.710 - 11.757: 99.1456% ( 1) 00:10:21.012 12.231 - 12.326: 99.1532% ( 1) 00:10:21.012 12.610 - 12.705: 99.1607% ( 1) 00:10:21.012 12.800 - 12.895: 99.1683% ( 1) 00:10:21.012 13.084 - 13.179: 99.1759% ( 1) 00:10:21.012 13.274 - 13.369: 99.1834% ( 1) 00:10:21.012 13.464 - 13.559: 99.1910% ( 1) 00:10:21.012 14.601 - 14.696: 99.1985% ( 1) 00:10:21.012 14.791 - 14.886: 99.2061% ( 1) 00:10:21.012 15.455 - 15.550: 99.2137% ( 1) 00:10:21.012 17.161 - 17.256: 99.2212% ( 1) 00:10:21.012 17.256 - 17.351: 99.2439% ( 3) 00:10:21.012 17.446 - 17.541: 99.2590% ( 
2) 00:10:21.012 17.541 - 17.636: 99.2968% ( 5) 00:10:21.012 17.636 - 17.730: 99.3120% ( 2) 00:10:21.012 17.730 - 17.825: 99.3271% ( 2) 00:10:21.012 17.825 - 17.920: 99.3346% ( 1) 00:10:21.012 17.920 - 18.015: 99.3422% ( 1) 00:10:21.012 18.015 - 18.110: 99.3800% ( 5) 00:10:21.012 18.110 - 18.204: 99.4178% ( 5) 00:10:21.012 18.204 - 18.299: 99.4859% ( 9) 00:10:21.012 18.299 - 18.394: 99.5615% ( 10) 00:10:21.012 18.394 - 18.489: 99.6068% ( 6) 00:10:21.012 18.489 - 18.584: 99.6371% ( 4) 00:10:21.012 18.584 - 18.679: 99.7051% ( 9) 00:10:21.012 18.679 - 18.773: 99.7581% ( 7) 00:10:21.012 18.773 - 18.868: 99.7732% ( 2) 00:10:21.012 18.868 - 18.963: 99.7883% ( 2) 00:10:21.012 18.963 - 19.058: 99.7959% ( 1) 00:10:21.012 19.058 - 19.153: 99.8337% ( 5) 00:10:21.012 19.153 - 19.247: 99.8639% ( 4) 00:10:21.012 19.342 - 19.437: 99.8790% ( 2) 00:10:21.012 19.721 - 19.816: 99.8866% ( 1) 00:10:21.012 20.196 - 20.290: 99.8941% ( 1) 00:10:21.012 21.902 - 21.997: 99.9017% ( 1) 00:10:21.012 26.927 - 27.117: 99.9093% ( 1) 00:10:21.012 29.772 - 29.961: 99.9168% ( 1) 00:10:21.012 30.341 - 30.530: 99.9244% ( 1) 00:10:21.012 3980.705 - 4004.978: 99.9546% ( 4) 00:10:21.012 4004.978 - 4029.250: 100.0000% ( 6) 00:10:21.012 00:10:21.012 Complete histogram 00:10:21.012 ================== 00:10:21.012 Range in us Cumulative Count 00:10:21.012 2.050 - 2.062: 0.0605% ( 8) 00:10:21.012 2.062 - 2.074: 22.6599% ( 2989) 00:10:21.012 2.074 - 2.086: 50.1059% ( 3630) 00:10:21.012 2.086 - 2.098: 51.4139% ( 173) 00:10:21.012 2.098 - 2.110: 56.0941% ( 619) 00:10:21.012 2.110 - 2.121: 59.0504% ( 391) 00:10:21.012 2.121 - 2.133: 61.3564% ( 305) 00:10:21.012 2.133 - 2.145: 73.2572% ( 1574) 00:10:21.012 2.145 - 2.157: 78.4969% ( 693) 00:10:21.012 2.157 - 2.169: 79.1774% ( 90) 00:10:21.012 2.169 - 2.181: 80.9693% ( 237) 00:10:21.012 2.181 - 2.193: 81.9220% ( 126) 00:10:21.012 2.193 - 2.204: 82.8444% ( 122) 00:10:21.012 2.204 - 2.216: 88.8553% ( 795) 00:10:21.012 2.216 - 2.228: 91.8494% ( 396) 00:10:21.012 2.228 - 2.240: 92.0762% ( 30) 00:10:21.012 2.240 - 2.252: 92.8928% ( 108) 00:10:21.012 2.252 - 2.264: 93.3011% ( 54) 00:10:21.012 2.264 - 2.276: 93.6640% ( 48) 00:10:21.012 2.276 - 2.287: 94.6545% ( 131) 00:10:21.012 2.287 - 2.299: 95.1232% ( 62) 00:10:21.012 2.299 - 2.311: 95.1837% ( 8) 00:10:21.012 2.311 - 2.323: 95.2745% ( 12) 00:10:21.012 2.323 - 2.335: 95.3501% ( 10) 00:10:21.012 2.335 - 2.347: 95.4030% ( 7) 00:10:21.012 2.347 - 2.359: 95.5996% ( 26) 00:10:21.012 2.359 - 2.370: 95.9549% ( 47) 00:10:21.012 2.370 - 2.382: 96.2498% ( 39) 00:10:21.012 2.382 - 2.394: 96.5069% ( 34) 00:10:21.012 2.394 - 2.406: 96.9076% ( 53) 00:10:21.012 2.406 - 2.418: 97.1798% ( 36) 00:10:21.012 2.418 - 2.430: 97.3991% ( 29) 00:10:21.012 2.430 - 2.441: 97.6183% ( 29) 00:10:21.012 2.441 - 2.453: 97.7393% ( 16) 00:10:21.012 2.453 - 2.465: 97.8905% ( 20) 00:10:21.012 2.465 - 2.477: 98.0417% ( 20) 00:10:21.012 2.477 - 2.489: 98.1778% ( 18) 00:10:21.012 2.489 - 2.501: 98.2837% ( 14) 00:10:21.012 2.501 - 2.513: 98.3290% ( 6) 00:10:21.012 2.513 - 2.524: 98.3744% ( 6) 00:10:21.012 2.524 - 2.536: 98.4198% ( 6) 00:10:21.012 2.536 - 2.548: 98.4273% ( 1) 00:10:21.012 2.548 - 2.560: 98.4576% ( 4) 00:10:21.012 2.560 - 2.572: 98.4727% ( 2) 00:10:21.012 2.596 - 2.607: 98.4803% ( 1) 00:10:21.012 2.607 - 2.619: 98.4878% ( 1) 00:10:21.012 2.619 - 2.631: 98.5029% ( 2) 00:10:21.012 2.655 - 2.667: 98.5181% ( 2) 00:10:21.012 2.667 - 2.679: 98.5256% ( 1) 00:10:21.012 2.702 - 2.714: 98.5332% ( 1) 00:10:21.012 2.714 - 2.726: 98.5408% ( 1) 00:10:21.012 2.726 - 2.738: 98.5483% ( 
1) 00:10:21.012 2.797 - 2.809: 98.5559% ( 1) 00:10:21.012 2.809 - 2.821: 98.5634% ( 1) 00:10:21.012 2.916 - 2.927: 98.5786% ( 2) 00:10:21.012 2.987 - 2.999: 98.5861% ( 1) 00:10:21.012 3.200 - 3.224: 98.5937% ( 1) 00:10:21.012 3.271 - 3.295: 98.6012% ( 1) 00:10:21.012 3.342 - 3.366: 98.6164% ( 2) 00:10:21.012 3.390 - 3.413: 98.6390% ( 3) 00:10:21.012 3.461 - 3.484: 98.6617% ( 3) 00:10:21.012 3.484 - 3.508: 98.6768% ( 2) 00:10:21.012 3.508 - 3.532: 98.6844% ( 1) 00:10:21.012 3.532 - 3.556: 98.6920% ( 1) 00:10:21.012 3.556 - 3.579: 98.6995% ( 1) 00:10:21.012 3.579 - 3.603: 98.7071% ( 1) 00:10:21.012 3.627 - 3.650: 98.7222% ( 2) 00:10:21.012 3.674 - 3.698: 98.7373% ( 2) 00:10:21.012 3.698 - 3.721: 98.7449% ( 1) 00:10:21.012 3.840 - 3.864: 98.7600% ( 2) 00:10:21.012 5.286 - 5.310: 98.7676% ( 1) 00:10:21.012 5.310 - 5.333: 98.7751% ( 1) 00:10:21.012 5.333 - 5.357: 98.7827% ( 1) 00:10:21.012 5.547 - 5.570: 98.7903% ( 1) 00:10:21.012 5.594 - 5.618: 98.7978% ( 1) 00:10:21.012 5.879 - 5.902: 98.8054% ( 1) 00:10:21.012 6.116 - 6.163: 98.8205% ( 2) 00:10:21.012 6.163 - 6.210: 98.8281% ( 1) 00:10:21.012 6.447 - 6.495: 98.8356% ( 1) 00:10:21.012 6.827 - 6.874: 98.8432% ( 1) 00:10:21.012 6.969 - 7.016: 98.8507% ( 1) 00:10:21.013 7.016 - 7.064: 98.8583% ( 1) 00:10:21.013 7.064 - 7.111: 98.8659% ( 1) 00:10:21.013 7.443 - 7.490: 98.8734% ( 1) 00:10:21.013 7.775 - 7.822: 98.8810% ( 1) 00:10:21.013 8.107 - 8.154: 98.8886% ( 1) 00:10:21.013 8.960 - 9.007: 98.8961% ( 1) 00:10:21.013 9.387 - 9.434: 98.9037% ( 1) 00:10:21.013 15.739 - 15.834: 98.9264% ( 3) 00:10:21.013 15.834 - 15.929: 98.9490% ( 3) 00:10:21.013 15.929 - 16.024: 98.9566% ( 1) 00:10:21.013 16.024 - 16.119: 98.9793% ( 3) 00:10:21.013 16.119 - 16.213: 99.0020% ( 3) 00:10:21.013 16.213 - 16.308: 99.0171% ( 2) 00:10:21.013 16.308 - 16.403: 99.0398% ( 3) 00:10:21.013 16.403 - 16.498: 99.0549% ( 2) 00:10:21.013 16.498 - 16.593: 99.1003% ( 6) 00:10:21.013 16.593 - 16.687: 99.1305% ( 4) 00:10:21.013 16.687 - 16.782: 99.2061% ( 10) 00:10:21.013 16.782 - 16.877: 99.2439% ( 5) 00:10:21.013 16.877 - 16.972: 99.2742% ( 4) 00:10:21.013 16.972 - 17.067: 99.2817% ( 1) 00:10:21.013 17.067 - 17.161: 99.2968% ( 2) 00:10:21.013 17.161 - 17.256: 99.3044% ( 1) 00:10:21.013 17.256 - 17.351: 99.3195% ( 2) 00:10:21.013 17.541 - 17.636: 99.3271% ( 1) 00:10:21.013 17.920 - 18.015: 99.3346% ( 1) 00:10:21.013 18.015 - 18.110: 99.3422% ( 1) 00:10:21.013 18.204 - 18.299: 99.3498% ( 1) 00:10:21.013 3470.981 - 3495.253: 99.3573% ( 1) 00:10:21.013 3980.705 - 4004.978: 99.7807% ( 56) 00:10:21.013 4004.978 - 4029.250: 100.0000% ( 29) 00:10:21.013 00:10:21.013 [2024-05-15 10:50:36.933690] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:21.013 10:50:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:10:21.013 10:50:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:10:21.013 10:50:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:10:21.013 10:50:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:10:21.013 10:50:36 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:21.013 [ 00:10:21.013 { 00:10:21.013 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:21.013 "subtype":
"Discovery", 00:10:21.013 "listen_addresses": [], 00:10:21.013 "allow_any_host": true, 00:10:21.013 "hosts": [] 00:10:21.013 }, 00:10:21.013 { 00:10:21.013 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:21.013 "subtype": "NVMe", 00:10:21.013 "listen_addresses": [ 00:10:21.013 { 00:10:21.013 "trtype": "VFIOUSER", 00:10:21.013 "adrfam": "IPv4", 00:10:21.013 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:21.013 "trsvcid": "0" 00:10:21.013 } 00:10:21.013 ], 00:10:21.013 "allow_any_host": true, 00:10:21.013 "hosts": [], 00:10:21.013 "serial_number": "SPDK1", 00:10:21.013 "model_number": "SPDK bdev Controller", 00:10:21.013 "max_namespaces": 32, 00:10:21.013 "min_cntlid": 1, 00:10:21.013 "max_cntlid": 65519, 00:10:21.013 "namespaces": [ 00:10:21.013 { 00:10:21.013 "nsid": 1, 00:10:21.013 "bdev_name": "Malloc1", 00:10:21.013 "name": "Malloc1", 00:10:21.013 "nguid": "AA3777C58ECB4C468FE3A32780233EC6", 00:10:21.013 "uuid": "aa3777c5-8ecb-4c46-8fe3-a32780233ec6" 00:10:21.013 }, 00:10:21.013 { 00:10:21.013 "nsid": 2, 00:10:21.013 "bdev_name": "Malloc3", 00:10:21.013 "name": "Malloc3", 00:10:21.013 "nguid": "7AB187B2387C4E8E81579D945BADF1C4", 00:10:21.013 "uuid": "7ab187b2-387c-4e8e-8157-9d945badf1c4" 00:10:21.013 } 00:10:21.013 ] 00:10:21.013 }, 00:10:21.013 { 00:10:21.013 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:21.013 "subtype": "NVMe", 00:10:21.013 "listen_addresses": [ 00:10:21.013 { 00:10:21.013 "trtype": "VFIOUSER", 00:10:21.013 "adrfam": "IPv4", 00:10:21.013 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:21.013 "trsvcid": "0" 00:10:21.013 } 00:10:21.013 ], 00:10:21.013 "allow_any_host": true, 00:10:21.013 "hosts": [], 00:10:21.013 "serial_number": "SPDK2", 00:10:21.013 "model_number": "SPDK bdev Controller", 00:10:21.013 "max_namespaces": 32, 00:10:21.013 "min_cntlid": 1, 00:10:21.013 "max_cntlid": 65519, 00:10:21.013 "namespaces": [ 00:10:21.013 { 00:10:21.013 "nsid": 1, 00:10:21.013 "bdev_name": "Malloc2", 00:10:21.013 "name": "Malloc2", 00:10:21.013 "nguid": "27D5C999BB18475385A1885135EC726C", 00:10:21.013 "uuid": "27d5c999-bb18-4753-85a1-885135ec726c" 00:10:21.013 } 00:10:21.013 ] 00:10:21.013 } 00:10:21.013 ] 00:10:21.013 10:50:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:10:21.013 10:50:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2747077 00:10:21.013 10:50:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:10:21.013 10:50:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:10:21.013 10:50:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1261 -- # local i=0 00:10:21.013 10:50:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:10:21.013 10:50:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:10:21.013 10:50:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # return 0 00:10:21.013 10:50:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:10:21.013 10:50:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:10:21.272 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.272 [2024-05-15 10:50:37.385027] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:10:21.272 Malloc4 00:10:21.272 10:50:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:10:21.530 [2024-05-15 10:50:37.723503] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:10:21.531 10:50:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:10:21.789 Asynchronous Event Request test 00:10:21.789 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:10:21.789 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:10:21.789 Registering asynchronous event callbacks... 00:10:21.789 Starting namespace attribute notice tests for all controllers... 00:10:21.789 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:10:21.789 aer_cb - Changed Namespace 00:10:21.789 Cleaning up... 00:10:21.789 [ 00:10:21.789 { 00:10:21.789 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:21.789 "subtype": "Discovery", 00:10:21.789 "listen_addresses": [], 00:10:21.789 "allow_any_host": true, 00:10:21.789 "hosts": [] 00:10:21.789 }, 00:10:21.789 { 00:10:21.789 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:10:21.789 "subtype": "NVMe", 00:10:21.789 "listen_addresses": [ 00:10:21.789 { 00:10:21.789 "trtype": "VFIOUSER", 00:10:21.789 "adrfam": "IPv4", 00:10:21.789 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:10:21.789 "trsvcid": "0" 00:10:21.789 } 00:10:21.789 ], 00:10:21.789 "allow_any_host": true, 00:10:21.789 "hosts": [], 00:10:21.789 "serial_number": "SPDK1", 00:10:21.789 "model_number": "SPDK bdev Controller", 00:10:21.789 "max_namespaces": 32, 00:10:21.789 "min_cntlid": 1, 00:10:21.789 "max_cntlid": 65519, 00:10:21.789 "namespaces": [ 00:10:21.789 { 00:10:21.789 "nsid": 1, 00:10:21.789 "bdev_name": "Malloc1", 00:10:21.789 "name": "Malloc1", 00:10:21.789 "nguid": "AA3777C58ECB4C468FE3A32780233EC6", 00:10:21.789 "uuid": "aa3777c5-8ecb-4c46-8fe3-a32780233ec6" 00:10:21.789 }, 00:10:21.789 { 00:10:21.789 "nsid": 2, 00:10:21.789 "bdev_name": "Malloc3", 00:10:21.789 "name": "Malloc3", 00:10:21.789 "nguid": "7AB187B2387C4E8E81579D945BADF1C4", 00:10:21.789 "uuid": "7ab187b2-387c-4e8e-8157-9d945badf1c4" 00:10:21.789 } 00:10:21.789 ] 00:10:21.789 }, 00:10:21.789 { 00:10:21.789 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:10:21.789 "subtype": "NVMe", 00:10:21.789 "listen_addresses": [ 00:10:21.790 { 00:10:21.790 "trtype": "VFIOUSER", 00:10:21.790 "adrfam": "IPv4", 00:10:21.790 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:10:21.790 "trsvcid": "0" 00:10:21.790 } 00:10:21.790 ], 00:10:21.790 "allow_any_host": true, 00:10:21.790 "hosts": [], 00:10:21.790 "serial_number": "SPDK2", 00:10:21.790 "model_number": "SPDK bdev Controller", 00:10:21.790 
"max_namespaces": 32, 00:10:21.790 "min_cntlid": 1, 00:10:21.790 "max_cntlid": 65519, 00:10:21.790 "namespaces": [ 00:10:21.790 { 00:10:21.790 "nsid": 1, 00:10:21.790 "bdev_name": "Malloc2", 00:10:21.790 "name": "Malloc2", 00:10:21.790 "nguid": "27D5C999BB18475385A1885135EC726C", 00:10:21.790 "uuid": "27d5c999-bb18-4753-85a1-885135ec726c" 00:10:21.790 }, 00:10:21.790 { 00:10:21.790 "nsid": 2, 00:10:21.790 "bdev_name": "Malloc4", 00:10:21.790 "name": "Malloc4", 00:10:21.790 "nguid": "E60B372CED904A44B68E150E4FC05245", 00:10:21.790 "uuid": "e60b372c-ed90-4a44-b68e-150e4fc05245" 00:10:21.790 } 00:10:21.790 ] 00:10:21.790 } 00:10:21.790 ] 00:10:21.790 10:50:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2747077 00:10:21.790 10:50:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:10:21.790 10:50:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2741476 00:10:21.790 10:50:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 2741476 ']' 00:10:21.790 10:50:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 2741476 00:10:21.790 10:50:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:10:21.790 10:50:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:21.790 10:50:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2741476 00:10:21.790 10:50:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:21.790 10:50:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:21.790 10:50:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2741476' 00:10:21.790 killing process with pid 2741476 00:10:21.790 10:50:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 2741476 00:10:21.790 [2024-05-15 10:50:38.010468] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:21.790 10:50:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 2741476 00:10:22.358 10:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:22.358 10:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:22.358 10:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:10:22.358 10:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:10:22.358 10:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:10:22.358 10:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:10:22.358 10:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2747227 00:10:22.358 10:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2747227' 00:10:22.358 Process pid: 2747227 00:10:22.358 10:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:22.358 10:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2747227 00:10:22.358 10:50:38 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@827 -- # '[' -z 2747227 ']' 00:10:22.358 10:50:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:22.358 10:50:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:22.358 10:50:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:22.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:22.358 10:50:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:22.358 10:50:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:22.358 [2024-05-15 10:50:38.446893] thread.c:2937:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:10:22.358 [2024-05-15 10:50:38.447890] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:10:22.358 [2024-05-15 10:50:38.447967] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:22.358 EAL: No free 2048 kB hugepages reported on node 1 00:10:22.358 [2024-05-15 10:50:38.515606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:22.617 [2024-05-15 10:50:38.628033] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:22.617 [2024-05-15 10:50:38.628084] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:22.617 [2024-05-15 10:50:38.628100] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:22.617 [2024-05-15 10:50:38.628113] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:22.617 [2024-05-15 10:50:38.628124] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:22.617 [2024-05-15 10:50:38.628207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:22.617 [2024-05-15 10:50:38.628266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:22.617 [2024-05-15 10:50:38.628380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:22.617 [2024-05-15 10:50:38.628384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.617 [2024-05-15 10:50:38.732649] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:10:22.617 [2024-05-15 10:50:38.732899] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:10:22.617 [2024-05-15 10:50:38.733174] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:10:22.617 [2024-05-15 10:50:38.733805] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:10:22.617 [2024-05-15 10:50:38.734064] thread.c:2095:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
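Note: the setup that follows repeats the per-device RPC sequence used throughout this suite. Condensed into a sketch (the full /var/jenkins/... path to rpc.py is shortened to scripts/rpc.py; this interrupt-mode phase additionally passes '-M -I' to nvmf_create_transport, as the next lines show):

    # Sketch: create the VFIOUSER transport, back a subsystem with a malloc
    # bdev, and expose it on a vfio-user socket directory (trsvcid is 0).
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0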
00:10:22.617 10:50:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:22.617 10:50:38 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@860 -- # return 0 00:10:22.618 10:50:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:10:23.551 10:50:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:10:23.809 10:50:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:10:23.809 10:50:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:10:23.809 10:50:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:23.809 10:50:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:10:23.809 10:50:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:10:24.068 Malloc1 00:10:24.068 10:50:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:10:24.326 10:50:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:10:24.584 10:50:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:10:24.860 [2024-05-15 10:50:41.036899] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:24.860 10:50:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:10:24.860 10:50:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:10:24.860 10:50:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:10:25.132 Malloc2 00:10:25.390 10:50:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:10:25.647 10:50:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:10:25.905 10:50:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:10:26.163 10:50:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:10:26.163 10:50:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2747227 00:10:26.163 10:50:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@946 -- # '[' -z 2747227 ']' 00:10:26.163 10:50:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@950 -- # kill -0 2747227 
00:10:26.163 10:50:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # uname 00:10:26.163 10:50:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:26.163 10:50:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2747227 00:10:26.163 10:50:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:26.163 10:50:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:26.163 10:50:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2747227' 00:10:26.163 killing process with pid 2747227 00:10:26.163 10:50:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@965 -- # kill 2747227 00:10:26.163 [2024-05-15 10:50:42.257195] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:26.163 10:50:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@970 -- # wait 2747227 00:10:26.421 10:50:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:10:26.421 10:50:42 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:26.421 00:10:26.421 real 0m52.891s 00:10:26.421 user 3m28.547s 00:10:26.421 sys 0m4.534s 00:10:26.421 10:50:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:26.421 10:50:42 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:10:26.421 ************************************ 00:10:26.421 END TEST nvmf_vfio_user 00:10:26.421 ************************************ 00:10:26.421 10:50:42 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:26.421 10:50:42 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:26.421 10:50:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:26.421 10:50:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:26.421 ************************************ 00:10:26.422 START TEST nvmf_vfio_user_nvme_compliance 00:10:26.422 ************************************ 00:10:26.422 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:10:26.681 * Looking for test storage... 
00:10:26.681 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=2747825 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2747825' 00:10:26.681 Process pid: 2747825 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2747825 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@827 -- # '[' -z 2747825 ']' 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:26.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:26.681 10:50:42 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:26.681 [2024-05-15 10:50:42.741473] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:10:26.681 [2024-05-15 10:50:42.741558] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:26.681 EAL: No free 2048 kB hugepages reported on node 1 00:10:26.681 [2024-05-15 10:50:42.819593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:26.940 [2024-05-15 10:50:42.936430] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:26.940 [2024-05-15 10:50:42.936485] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:26.940 [2024-05-15 10:50:42.936501] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:26.940 [2024-05-15 10:50:42.936515] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:26.940 [2024-05-15 10:50:42.936527] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:26.940 [2024-05-15 10:50:42.936586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.940 [2024-05-15 10:50:42.936637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:26.940 [2024-05-15 10:50:42.936641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.505 10:50:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:27.505 10:50:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # return 0 00:10:27.505 10:50:43 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:10:28.878 10:50:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:28.878 10:50:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:10:28.878 10:50:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:28.878 10:50:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.878 10:50:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:28.878 10:50:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.878 10:50:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:10:28.878 10:50:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:28.878 10:50:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.878 10:50:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:28.878 malloc0 00:10:28.878 10:50:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.878 10:50:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:10:28.878 10:50:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.878 10:50:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:28.878 10:50:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.878 10:50:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:28.878 10:50:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.878 10:50:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:28.878 10:50:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.878 10:50:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:28.878 10:50:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.878 10:50:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:28.878 [2024-05-15 10:50:44.790247] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated 
feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:10:28.878 10:50:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.878 10:50:44 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:10:28.878 EAL: No free 2048 kB hugepages reported on node 1 00:10:28.878 00:10:28.878 00:10:28.878 CUnit - A unit testing framework for C - Version 2.1-3 00:10:28.878 http://cunit.sourceforge.net/ 00:10:28.878 00:10:28.878 00:10:28.878 Suite: nvme_compliance 00:10:28.878 Test: admin_identify_ctrlr_verify_dptr ...[2024-05-15 10:50:44.968469] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:28.878 [2024-05-15 10:50:44.969902] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:10:28.878 [2024-05-15 10:50:44.969950] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:10:28.878 [2024-05-15 10:50:44.969971] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:10:28.878 [2024-05-15 10:50:44.973502] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:28.878 passed 00:10:28.878 Test: admin_identify_ctrlr_verify_fused ...[2024-05-15 10:50:45.058079] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:28.878 [2024-05-15 10:50:45.061107] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:28.878 passed 00:10:29.136 Test: admin_identify_ns ...[2024-05-15 10:50:45.150534] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:29.136 [2024-05-15 10:50:45.210975] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:10:29.136 [2024-05-15 10:50:45.218946] ctrlr.c:2706:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:10:29.136 [2024-05-15 10:50:45.240059] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:29.136 passed 00:10:29.136 Test: admin_get_features_mandatory_features ...[2024-05-15 10:50:45.321811] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:29.136 [2024-05-15 10:50:45.324834] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:29.136 passed 00:10:29.395 Test: admin_get_features_optional_features ...[2024-05-15 10:50:45.409396] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:29.395 [2024-05-15 10:50:45.412422] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:29.395 passed 00:10:29.395 Test: admin_set_features_number_of_queues ...[2024-05-15 10:50:45.495661] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:29.395 [2024-05-15 10:50:45.600058] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:29.653 passed 00:10:29.653 Test: admin_get_log_page_mandatory_logs ...[2024-05-15 10:50:45.684165] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:29.653 [2024-05-15 10:50:45.687188] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:29.653 passed 
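The compliance stage above builds its vfio-user target entirely over SPDK's JSON-RPC interface: a VFIOUSER transport, a 64 MB malloc bdev, subsystem nqn.2021-09.io.spdk:cnode0 with that bdev as a namespace, and a listener rooted at /var/run/vfio-user. The WARNING about [listen_]address.transport is the RPC layer decoding the legacy "transport" key instead of "trtype"; it is harmless here and, per the message, scheduled for removal in v24.09. The same target can be rebuilt by hand with scripts/rpc.py against an already running nvmf_tgt; a minimal sketch, assuming the default RPC socket:

  # Sketch: rebuild the compliance target manually (assumes nvmf_tgt is
  # running and SPDK's scripts/rpc.py is on PATH as rpc.py).
  rpc.py nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user
  rpc.py bdev_malloc_create 64 512 -b malloc0    # 64 MB bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
      -t VFIOUSER -a /var/run/vfio-user -s 0

With the listener in place, nvme_compliance attaches through the vfio-user socket directory passed as traddr, which is why every test case below brackets itself with enabling/disabling controller notices.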
00:10:29.653 Test: admin_get_log_page_with_lpo ...[2024-05-15 10:50:45.770420] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:29.653 [2024-05-15 10:50:45.837944] ctrlr.c:2654:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:10:29.653 [2024-05-15 10:50:45.851017] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:29.653 passed 00:10:29.911 Test: fabric_property_get ...[2024-05-15 10:50:45.934497] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:29.911 [2024-05-15 10:50:45.935768] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:10:29.911 [2024-05-15 10:50:45.937516] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:29.911 passed 00:10:29.911 Test: admin_delete_io_sq_use_admin_qid ...[2024-05-15 10:50:46.021124] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:29.911 [2024-05-15 10:50:46.022417] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:10:29.911 [2024-05-15 10:50:46.024153] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:29.911 passed 00:10:29.911 Test: admin_delete_io_sq_delete_sq_twice ...[2024-05-15 10:50:46.109624] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:30.169 [2024-05-15 10:50:46.192940] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:30.169 [2024-05-15 10:50:46.208944] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:30.169 [2024-05-15 10:50:46.214046] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:30.169 passed 00:10:30.169 Test: admin_delete_io_cq_use_admin_qid ...[2024-05-15 10:50:46.297807] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:30.169 [2024-05-15 10:50:46.299102] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:10:30.169 [2024-05-15 10:50:46.300830] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:30.169 passed 00:10:30.169 Test: admin_delete_io_cq_delete_cq_first ...[2024-05-15 10:50:46.383505] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:30.427 [2024-05-15 10:50:46.459939] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:30.427 [2024-05-15 10:50:46.483960] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:10:30.427 [2024-05-15 10:50:46.489055] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:30.427 passed 00:10:30.427 Test: admin_create_io_cq_verify_iv_pc ...[2024-05-15 10:50:46.572869] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:30.427 [2024-05-15 10:50:46.574164] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:10:30.427 [2024-05-15 10:50:46.574217] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:10:30.427 [2024-05-15 10:50:46.575894] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:30.427 passed 00:10:30.427 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-05-15 
10:50:46.657074] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:30.685 [2024-05-15 10:50:46.752953] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:10:30.685 [2024-05-15 10:50:46.760942] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:10:30.685 [2024-05-15 10:50:46.768943] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:10:30.685 [2024-05-15 10:50:46.776938] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:10:30.685 [2024-05-15 10:50:46.806028] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:30.685 passed 00:10:30.685 Test: admin_create_io_sq_verify_pc ...[2024-05-15 10:50:46.885704] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:30.685 [2024-05-15 10:50:46.901955] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:10:30.943 [2024-05-15 10:50:46.920019] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:30.943 passed 00:10:30.943 Test: admin_create_io_qp_max_qps ...[2024-05-15 10:50:47.006594] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:31.878 [2024-05-15 10:50:48.103946] nvme_ctrlr.c:5330:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:10:32.444 [2024-05-15 10:50:48.488573] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:32.444 passed 00:10:32.444 Test: admin_create_io_sq_shared_cq ...[2024-05-15 10:50:48.573839] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:10:32.702 [2024-05-15 10:50:48.702942] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:10:32.702 [2024-05-15 10:50:48.740050] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:10:32.702 passed 00:10:32.702 00:10:32.702 Run Summary: Type Total Ran Passed Failed Inactive 00:10:32.702 suites 1 1 n/a 0 0 00:10:32.702 tests 18 18 18 0 0 00:10:32.702 asserts 360 360 360 0 n/a 00:10:32.702 00:10:32.702 Elapsed time = 1.564 seconds 00:10:32.702 10:50:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2747825 00:10:32.702 10:50:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@946 -- # '[' -z 2747825 ']' 00:10:32.702 10:50:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # kill -0 2747825 00:10:32.702 10:50:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # uname 00:10:32.702 10:50:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:10:32.702 10:50:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2747825 00:10:32.702 10:50:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:10:32.702 10:50:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:10:32.702 10:50:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2747825' 00:10:32.702 killing process with pid 2747825 00:10:32.702 10:50:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@965 -- # kill 2747825 00:10:32.702 [2024-05-15 10:50:48.824204] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:10:32.702 10:50:48 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # wait 2747825 00:10:32.961 10:50:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:10:32.961 10:50:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:10:32.961 00:10:32.961 real 0m6.501s 00:10:32.961 user 0m18.467s 00:10:32.961 sys 0m0.625s 00:10:32.961 10:50:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1122 -- # xtrace_disable 00:10:32.961 10:50:49 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:10:32.961 ************************************ 00:10:32.961 END TEST nvmf_vfio_user_nvme_compliance 00:10:32.961 ************************************ 00:10:32.961 10:50:49 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:32.961 10:50:49 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:10:32.961 10:50:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:10:32.961 10:50:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:32.961 ************************************ 00:10:32.961 START TEST nvmf_vfio_user_fuzz 00:10:32.961 ************************************ 00:10:32.961 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:10:33.220 * Looking for test storage... 
00:10:33.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.220 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.220 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:10:33.221 10:50:49 
nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2748668 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2748668' 00:10:33.221 Process pid: 2748668 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2748668 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@827 -- # '[' -z 2748668 ']' 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@832 -- # local max_retries=100 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # xtrace_disable 00:10:33.221 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:33.480 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:10:33.480 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # return 0 00:10:33.480 10:50:49 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:10:34.414 10:50:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:10:34.414 10:50:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.414 10:50:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:34.414 10:50:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.414 10:50:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:10:34.414 10:50:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:10:34.414 10:50:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.414 10:50:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:34.414 malloc0 00:10:34.414 10:50:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.414 10:50:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:10:34.414 10:50:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.414 10:50:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:34.673 10:50:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.673 10:50:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:10:34.673 10:50:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.673 10:50:50 nvmf_tcp.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:10:34.673 10:50:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.673 10:50:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:10:34.673 10:50:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:34.673 10:50:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:10:34.673 10:50:50 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:34.673 10:50:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:10:34.673 10:50:50 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:11:06.784 Fuzzing completed. Shutting down the fuzz application 00:11:06.784 00:11:06.784 Dumping successful admin opcodes: 00:11:06.784 8, 9, 10, 24, 00:11:06.784 Dumping successful io opcodes: 00:11:06.784 0, 00:11:06.784 NS: 0x200003a1ef00 I/O qp, Total commands completed: 559085, total successful commands: 2152, random_seed: 568322944 00:11:06.784 NS: 0x200003a1ef00 admin qp, Total commands completed: 104991, total successful commands: 865, random_seed: 1305906560 00:11:06.784 10:51:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:11:06.784 10:51:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.784 10:51:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:06.784 10:51:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.784 10:51:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2748668 00:11:06.784 10:51:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@946 -- # '[' -z 2748668 ']' 00:11:06.784 10:51:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # kill -0 2748668 00:11:06.784 10:51:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # uname 00:11:06.784 10:51:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:06.784 10:51:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2748668 00:11:06.784 10:51:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:06.784 10:51:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:06.784 10:51:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2748668' 00:11:06.784 killing process with pid 2748668 00:11:06.784 10:51:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@965 -- # kill 2748668 00:11:06.784 10:51:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # wait 2748668 00:11:06.784 10:51:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 
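The fuzz stage reuses the same target layout (VFIOUSER transport, malloc0 namespace, cnode0 subsystem) and then drives it for 30 seconds with a fixed seed, so the campaign is reproducible: -m 0x2 pins the core mask, -t 30 the duration, -S 123456 the seed, and -F the vfio-user transport ID. The summary numbers are the profile you expect when random inputs hit a target that stays up for the whole run: 2152 of 559085 completed I/O commands succeeded (about 0.4%), and 865 of 104991 admin commands (about 0.8%). A sketch of replaying the identical campaign by hand, run from the spdk tree with the target from the setup above still listening:

  # Sketch: replay the fuzz run with the same seed; -N and -a are passed
  # through exactly as in the harness invocation above.
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
      -N -a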
00:11:06.784 10:51:21 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:11:06.784 00:11:06.784 real 0m32.398s 00:11:06.784 user 0m31.272s 00:11:06.784 sys 0m28.135s 00:11:06.784 10:51:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:06.784 10:51:21 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:11:06.785 ************************************ 00:11:06.785 END TEST nvmf_vfio_user_fuzz 00:11:06.785 ************************************ 00:11:06.785 10:51:21 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:06.785 10:51:21 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:06.785 10:51:21 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:06.785 10:51:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:06.785 ************************************ 00:11:06.785 START TEST nvmf_host_management 00:11:06.785 ************************************ 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:06.785 * Looking for test storage... 00:11:06.785 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
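Sourcing nvmf/common.sh at the top of each target test rebuilds the whole environment from scratch: listen ports 4420 through 4422, a fresh host identity (NVME_HOSTNQN from nvme gen-hostnqn plus the matching NVME_HOSTID), and the NVMF_APP argument array being assembled here (-i for the shared-memory ID, -e 0xFFFF for the tracepoint mask), which finishes in the lines that follow. The NVME_HOST pair exists so initiator-side steps can present a consistent host identity; a sketch of how the nvme-cli driven tests in this suite typically consume it, where the target address and subsystem name are taken from later in this run and are assumptions at this point in the log:

  # Sketch: initiator-side connect with the identity generated above.
  # 10.0.0.2:4420 and cnode0 are only configured further down this test.
  nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"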
00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:11:06.785 10:51:21 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.163 10:51:24 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:08.163 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:08.163 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
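The scan above walks each cached PCI function and matches its vendor/device pair against the supported tables (0x8086:0x159b and 0x8086:0x1592 are Intel E810, 0x8086:0x37d2 is X722, the 0x15b3 entries are Mellanox), which is how both ports of the dual-port E810 at 0000:0a:00.x get reported. A standalone approximation of that classification, reading sysfs directly instead of the script's prebuilt pci_bus_cache:

  # Sketch: find E810 ports the way common.sh classifies them
  # (simplified; the harness consults a prebuilt PCI bus cache).
  for dev in /sys/bus/pci/devices/*; do
      vendor=$(<"$dev/vendor"); device=$(<"$dev/device")
      if [[ $vendor == 0x8086 && ( $device == 0x159b || $device == 0x1592 ) ]]; then
          echo "Found ${dev##*/} ($vendor - $device)"
      fi
  done

Because the transport is tcp rather than rdma, the rdma-only branches are skipped and the script moves straight on to finding the net devices that back each matched function.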
00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:08.163 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:08.163 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:08.163 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:08.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:08.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:11:08.164 00:11:08.164 --- 10.0.0.2 ping statistics --- 00:11:08.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.164 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:08.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:08.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:11:08.164 00:11:08.164 --- 10.0.0.1 ping statistics --- 00:11:08.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.164 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2755045 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2755045 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 2755045 ']' 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:08.164 10:51:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:08.164 [2024-05-15 10:51:24.335192] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
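With both ports identified, nvmftestinit splits them across a network namespace so one machine can exercise a real link end to end: cvl_0_0 moves into cvl_0_0_ns_spdk and takes the target address 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule admits TCP port 4420, and one ping in each direction (0.209 ms and 0.182 ms above) proves the wire before anything NVMe-related starts. Condensed, the plumbing is:

  # Sketch: the namespace split performed above, address flushes omitted.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every later target invocation is prefixed with ip netns exec cvl_0_0_ns_spdk, which is why the nvmf_tgt just launched with -m 0x1E reports its reactors on cores 1 through 4 in the lines that follow.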
00:11:08.164 [2024-05-15 10:51:24.335291] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.164 EAL: No free 2048 kB hugepages reported on node 1 00:11:08.422 [2024-05-15 10:51:24.418082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.422 [2024-05-15 10:51:24.537894] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.422 [2024-05-15 10:51:24.537974] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.422 [2024-05-15 10:51:24.538001] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.422 [2024-05-15 10:51:24.538015] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.422 [2024-05-15 10:51:24.538028] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:08.422 [2024-05-15 10:51:24.538111] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.422 [2024-05-15 10:51:24.538235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.422 [2024-05-15 10:51:24.538370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:08.422 [2024-05-15 10:51:24.538372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.356 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:09.356 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:11:09.356 10:51:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:09.356 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:09.356 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:09.356 10:51:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.356 10:51:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:09.356 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.356 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:09.356 [2024-05-15 10:51:25.297766] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.356 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.356 10:51:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:09.356 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:09.356 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:09.356 10:51:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:09.356 10:51:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:09.356 10:51:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:09.356 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.356 10:51:25 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:09.356 Malloc0 00:11:09.356 [2024-05-15 10:51:25.356890] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:09.356 [2024-05-15 10:51:25.357268] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:09.356 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.356 10:51:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:09.356 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:09.356 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:09.357 10:51:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2755219 00:11:09.357 10:51:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2755219 /var/tmp/bdevperf.sock 00:11:09.357 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@827 -- # '[' -z 2755219 ']' 00:11:09.357 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:09.357 10:51:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:09.357 10:51:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:09.357 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:09.357 10:51:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:09.357 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:09.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:09.357 10:51:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:09.357 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:09.357 10:51:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:09.357 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:09.357 10:51:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:09.357 { 00:11:09.357 "params": { 00:11:09.357 "name": "Nvme$subsystem", 00:11:09.357 "trtype": "$TEST_TRANSPORT", 00:11:09.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:09.357 "adrfam": "ipv4", 00:11:09.357 "trsvcid": "$NVMF_PORT", 00:11:09.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:09.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:09.357 "hdgst": ${hdgst:-false}, 00:11:09.357 "ddgst": ${ddgst:-false} 00:11:09.357 }, 00:11:09.357 "method": "bdev_nvme_attach_controller" 00:11:09.357 } 00:11:09.357 EOF 00:11:09.357 )") 00:11:09.357 10:51:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:09.357 10:51:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
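Everything bdevperf needs arrives as JSON on a file descriptor rather than a temp file: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per target from the heredoc above (with $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT already expanded), jq validates it, and process substitution presents the result to bdevperf as --json /dev/fd/63. A hand-rolled sketch for the single controller used here; the outer "subsystems"/"config" wrapper is SPDK's standard JSON config shape and is assumed rather than copied from this log:

  # Sketch: standalone equivalent of gen_nvmf_target_json for one
  # controller, fed to bdevperf via process substitution.
  gen_json() {
      jq -n '{subsystems: [{subsystem: "bdev", config: [{
          method: "bdev_nvme_attach_controller",
          params: {name: "Nvme0", trtype: "tcp", traddr: "10.0.0.2",
                   adrfam: "ipv4", trsvcid: "4420",
                   subnqn: "nqn.2016-06.io.spdk:cnode0",
                   hostnqn: "nqn.2016-06.io.spdk:host0",
                   hdgst: false, ddgst: false}}]}]}'
  }
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_json) \
      -q 64 -o 65536 -w verify -t 10

The workload flags mirror the harness run: 64 outstanding I/Os of 64 KiB each, a verify pattern, for 10 seconds. The expanded parameter block that printf joins together appears verbatim just below.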
00:11:09.357 10:51:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:09.357 10:51:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:09.357 "params": { 00:11:09.357 "name": "Nvme0", 00:11:09.357 "trtype": "tcp", 00:11:09.357 "traddr": "10.0.0.2", 00:11:09.357 "adrfam": "ipv4", 00:11:09.357 "trsvcid": "4420", 00:11:09.357 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:09.357 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:09.357 "hdgst": false, 00:11:09.357 "ddgst": false 00:11:09.357 }, 00:11:09.357 "method": "bdev_nvme_attach_controller" 00:11:09.357 }' 00:11:09.357 [2024-05-15 10:51:25.432072] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:11:09.357 [2024-05-15 10:51:25.432161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2755219 ] 00:11:09.357 EAL: No free 2048 kB hugepages reported on node 1 00:11:09.357 [2024-05-15 10:51:25.505782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.615 [2024-05-15 10:51:25.618316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.615 Running I/O for 10 seconds... 00:11:09.873 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:09.873 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@860 -- # return 0 00:11:09.874 10:51:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:09.874 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.874 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:09.874 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.874 10:51:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:09.874 10:51:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:09.874 10:51:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:09.874 10:51:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:09.874 10:51:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:09.874 10:51:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:09.874 10:51:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:09.874 10:51:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:09.874 10:51:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:09.874 10:51:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:09.874 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:09.874 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:09.874 10:51:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:09.874 10:51:25 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=3 00:11:09.874 10:51:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 3 -ge 100 ']' 00:11:09.874 10:51:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:11:10.134 10:51:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:11:10.134 10:51:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:10.134 10:51:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:11:10.134 10:51:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:10.134 10:51:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.134 10:51:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:10.134 10:51:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.134 10:51:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=323 00:11:10.134 10:51:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 323 -ge 100 ']' 00:11:10.134 10:51:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:10.134 10:51:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:10.134 10:51:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:10.134 10:51:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:10.134 10:51:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.134 10:51:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:10.134 [2024-05-15 10:51:26.232043] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10bd9d0 is same with the state(5) to be set 00:11:10.134 [2024-05-15 10:51:26.232492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:10.134 [2024-05-15 10:51:26.232538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.134 [2024-05-15 10:51:26.232566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:10.134 [2024-05-15 10:51:26.232583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.134 [2024-05-15 10:51:26.232601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:10.134 [2024-05-15 10:51:26.232616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.134 [2024-05-15 10:51:26.232632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:10.134 [2024-05-15 10:51:26.232646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.134 [2024-05-15 
10:51:26.232661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:10.134 [2024-05-15 10:51:26.232676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.134 [... 59 further identical command/completion pairs: WRITE sqid:1 cid:47-63 (lba 55168-57216) and READ sqid:1 cid:0-41 (lba 49152-54400), each len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 ...] 00:11:10.135 [2024-05-15 10:51:26.234764]
bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x115ff20 was disconnected and freed. reset controller. 00:11:10.135 [2024-05-15 10:51:26.235925] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:11:10.135 10:51:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.135 10:51:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:10.135 10:51:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:10.135 10:51:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:10.135 task offset: 54528 on job bdev=Nvme0n1 fails 00:11:10.135 00:11:10.135 Latency(us) 00:11:10.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:10.136 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:10.136 Job: Nvme0n1 ended in about 0.39 seconds with error 00:11:10.136 Verification LBA range: start 0x0 length 0x400 00:11:10.136 Nvme0n1 : 0.39 977.04 61.06 162.84 0.00 54628.80 2852.03 47768.46 00:11:10.136 =================================================================================================================== 00:11:10.136 Total : 977.04 61.06 162.84 0.00 54628.80 2852.03 47768.46 00:11:10.136 [2024-05-15 10:51:26.237973] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:10.136 [2024-05-15 10:51:26.238009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2e990 (9): Bad file descriptor 00:11:10.136 [2024-05-15 10:51:26.239864] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:11:10.136 [2024-05-15 10:51:26.240168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:11:10.136 [2024-05-15 10:51:26.240198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:10.136 [2024-05-15 10:51:26.240222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:11:10.136 [2024-05-15 10:51:26.240248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:11:10.136 [2024-05-15 10:51:26.240264] nvme_tcp.c:2426:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:11:10.136 [2024-05-15 10:51:26.240278] nvme_tcp.c:2216:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd2e990 00:11:10.136 [2024-05-15 10:51:26.240320] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd2e990 (9): Bad file descriptor 00:11:10.136 [2024-05-15 10:51:26.240347] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:11:10.136 [2024-05-15 10:51:26.240363] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:11:10.136 [2024-05-15 10:51:26.240380] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
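The long run of aborts just above is the intended effect of the access-control change: nvmf_subsystem_remove_host pulls host0 out of cnode0's allowed-host list, the target deletes that host's queue pairs, and every in-flight command completes as ABORTED - SQ DELETION. The initiator's reconnect then races the re-add and is refused with 'does not allow host', so bdevperf exits on the failed reset. A sketch of the two RPCs involved, assuming rpc.py's default target socket and the NQNs from this run:

# Host ACL toggle exercised by this test; NQNs are the ones from this run.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2016-06.io.spdk:cnode0
hostnqn=nqn.2016-06.io.spdk:host0

# Revoke: the target drops this host's qpairs, aborting its queued I/O.
"$rpc_py" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

# Re-admit: a later reset/reconnect from the initiator can now succeed.
"$rpc_py" nvmf_subsystem_add_host "$subnqn" "$hostnqn"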
00:11:10.136 [2024-05-15 10:51:26.240402] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:11:10.136 10:51:26 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:10.136 10:51:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:11.069 10:51:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2755219 00:11:11.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2755219) - No such process 00:11:11.069 10:51:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:11.069 10:51:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:11.069 10:51:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:11.069 10:51:27 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:11.069 10:51:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:11:11.069 10:51:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:11:11.069 10:51:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:11.069 10:51:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:11.069 { 00:11:11.069 "params": { 00:11:11.069 "name": "Nvme$subsystem", 00:11:11.069 "trtype": "$TEST_TRANSPORT", 00:11:11.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:11.069 "adrfam": "ipv4", 00:11:11.069 "trsvcid": "$NVMF_PORT", 00:11:11.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:11.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:11.069 "hdgst": ${hdgst:-false}, 00:11:11.069 "ddgst": ${ddgst:-false} 00:11:11.069 }, 00:11:11.069 "method": "bdev_nvme_attach_controller" 00:11:11.069 } 00:11:11.069 EOF 00:11:11.069 )") 00:11:11.069 10:51:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:11:11.069 10:51:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:11:11.069 10:51:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:11:11.069 10:51:27 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:11.069 "params": { 00:11:11.069 "name": "Nvme0", 00:11:11.069 "trtype": "tcp", 00:11:11.069 "traddr": "10.0.0.2", 00:11:11.069 "adrfam": "ipv4", 00:11:11.069 "trsvcid": "4420", 00:11:11.069 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:11.070 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:11.070 "hdgst": false, 00:11:11.070 "ddgst": false 00:11:11.070 }, 00:11:11.070 "method": "bdev_nvme_attach_controller" 00:11:11.070 }' 00:11:11.070 [2024-05-15 10:51:27.292483] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
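Hence the teardown idiom visible at host_management.sh line 91 below: bdevperf already exited after the failed reset, so the trap's kill -9 reports 'No such process' and the script swallows it rather than failing the run, then relaunches bdevperf for a short verify pass. A sketch of that pattern, reusing the helper names from the first sketch above:

# Tolerant kill: the perf process may already be gone; ignore the error.
kill -9 "$perfpid" || true

# Second pass: a 1-second verify run over a fresh config proves the
# target path works again once the host has been re-admitted.
"$bdevperf" --json <(gen_target_json) -q 64 -o 65536 -w verify -t 1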
00:11:11.070 [2024-05-15 10:51:27.292570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2755491 ] 00:11:11.328 EAL: No free 2048 kB hugepages reported on node 1 00:11:11.328 [2024-05-15 10:51:27.363258] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:11.328 [2024-05-15 10:51:27.476948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.585 Running I/O for 1 seconds... 00:11:12.541 00:11:12.541 Latency(us) 00:11:12.541 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:12.541 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:12.541 Verification LBA range: start 0x0 length 0x400 00:11:12.541 Nvme0n1 : 1.04 1041.59 65.10 0.00 0.00 60624.90 14369.37 47185.92 00:11:12.541 =================================================================================================================== 00:11:12.541 Total : 1041.59 65.10 0.00 0.00 60624.90 14369.37 47185.92 00:11:12.798 10:51:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:12.798 10:51:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:12.798 10:51:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:12.798 10:51:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:12.798 10:51:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:12.798 10:51:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:12.798 10:51:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:11:12.798 10:51:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:12.798 10:51:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:11:12.798 10:51:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:12.798 10:51:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:12.798 rmmod nvme_tcp 00:11:12.798 rmmod nvme_fabrics 00:11:12.798 rmmod nvme_keyring 00:11:12.798 10:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:12.798 10:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:11:12.798 10:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:11:12.798 10:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2755045 ']' 00:11:12.798 10:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2755045 00:11:12.798 10:51:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@946 -- # '[' -z 2755045 ']' 00:11:12.798 10:51:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@950 -- # kill -0 2755045 00:11:12.798 10:51:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # uname 00:11:12.798 10:51:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:12.798 10:51:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2755045 00:11:13.056 10:51:29 nvmf_tcp.nvmf_host_management -- 
common/autotest_common.sh@952 -- # process_name=reactor_1 00:11:13.056 10:51:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:11:13.056 10:51:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2755045' 00:11:13.056 killing process with pid 2755045 00:11:13.056 10:51:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # kill 2755045 00:11:13.056 [2024-05-15 10:51:29.055207] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:11:13.056 10:51:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@970 -- # wait 2755045 00:11:13.315 [2024-05-15 10:51:29.311958] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:13.315 10:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:13.315 10:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:13.315 10:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:13.315 10:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:13.315 10:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:13.315 10:51:29 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.315 10:51:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:13.315 10:51:29 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.219 10:51:31 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:15.219 10:51:31 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:15.219 00:11:15.219 real 0m9.740s 00:11:15.219 user 0m21.786s 00:11:15.219 sys 0m2.964s 00:11:15.219 10:51:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:15.219 10:51:31 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:15.219 ************************************ 00:11:15.219 END TEST nvmf_host_management 00:11:15.219 ************************************ 00:11:15.219 10:51:31 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:15.219 10:51:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:15.219 10:51:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:15.219 10:51:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:15.219 ************************************ 00:11:15.219 START TEST nvmf_lvol 00:11:15.219 ************************************ 00:11:15.219 10:51:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:15.478 * Looking for test storage... 
00:11:15.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.478 10:51:31 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:11:15.478 10:51:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:18.010 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:18.011 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:18.011 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:18.011 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:18.011 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:18.011 
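The device scan above is two steps: match PCI device IDs against the known Intel E810 table (0x159b here, twice, for the two ports of the 0000:0a:00.x adapter), then resolve each PCI function to its kernel net device through sysfs, which is how cvl_0_0 and cvl_0_1 are found. A reduced sketch of the sysfs walk, assuming pci_devs is already populated and skipping the link-state check the real helper performs:

# Reduced sketch of the PCI-to-netdev resolution; assumes pci_devs holds
# addresses such as 0000:0a:00.0 from the device-ID match above.
shopt -s nullglob   # unmatched globs expand to nothing
net_devs=()
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # bound interfaces
    (( ${#pci_net_devs[@]} )) || continue
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep names only
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done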
10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:18.011 10:51:33 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:18.011 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.011 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:11:18.011 00:11:18.011 --- 10.0.0.2 ping statistics --- 00:11:18.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.011 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:18.011 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:18.011 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:11:18.011 00:11:18.011 --- 10.0.0.1 ping statistics --- 00:11:18.011 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.011 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2757984 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2757984 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@827 -- # '[' -z 2757984 ']' 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:18.011 10:51:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:18.011 [2024-05-15 10:51:34.206075] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:11:18.011 [2024-05-15 10:51:34.206167] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.270 EAL: No free 2048 kB hugepages reported on node 1 00:11:18.270 [2024-05-15 10:51:34.288230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:18.270 [2024-05-15 10:51:34.404404] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.270 [2024-05-15 10:51:34.404473] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
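Because target and initiator share one machine, the nvmf_tcp_init steps above split the two E810 ports across a network namespace: cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the cross-namespace pings prove the path before the target starts. A condensed sketch mirroring the commands in the trace; interface names and addresses are the ones this rig uses.

# Condensed from the nvmf_tcp_init steps above.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Let NVMe/TCP traffic from the namespace reach the initiator interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Sanity-check both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1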
00:11:18.270 [2024-05-15 10:51:34.404490] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.270 [2024-05-15 10:51:34.404504] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.270 [2024-05-15 10:51:34.404516] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:18.270 [2024-05-15 10:51:34.404600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.270 [2024-05-15 10:51:34.404672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:18.270 [2024-05-15 10:51:34.404675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.205 10:51:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:19.205 10:51:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@860 -- # return 0 00:11:19.205 10:51:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:19.205 10:51:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:19.205 10:51:35 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:19.205 10:51:35 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.205 10:51:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:19.205 [2024-05-15 10:51:35.384108] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.205 10:51:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:19.463 10:51:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:11:19.463 10:51:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:19.722 10:51:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:11:19.722 10:51:35 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:11:19.982 10:51:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:11:20.242 10:51:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d5dbd92c-c263-41ab-9215-29393bfb8923 00:11:20.242 10:51:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d5dbd92c-c263-41ab-9215-29393bfb8923 lvol 20 00:11:20.508 10:51:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=e53305c7-a5db-475c-81a8-df7964574111 00:11:20.508 10:51:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:20.846 10:51:36 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e53305c7-a5db-475c-81a8-df7964574111 00:11:21.103 10:51:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
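For reference, the fixture that nvmf_lvol.sh has just assembled reduces to the short RPC sequence below. This is a condensed sketch, not the script itself: the long /var/jenkins workspace paths are collapsed into a $rpc shorthand introduced here, and the variable captures mirror what the lvs=/lvol= assignments in the trace do.

rpc=./scripts/rpc.py                                  # shorthand for the rpc.py path seen in the trace
$rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8 KiB in-capsule data
$rpc bdev_malloc_create 64 512                        # -> Malloc0 (64 MiB, 512 B blocks)
$rpc bdev_malloc_create 64 512                        # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # stripe both malloc bdevs
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # lvstore on the RAID0 bdev; prints its UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # 20 MiB logical volume
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"    # expose the lvol as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The listener address 10.0.0.2 is the target-side interface inside the cvl_0_0_ns_spdk namespace created during nvmf_tcp_init above.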
00:11:21.361 [2024-05-15 10:51:37.422409] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:21.361 [2024-05-15 10:51:37.422709] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.361 10:51:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:21.620 10:51:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2758418 00:11:21.620 10:51:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:11:21.620 10:51:37 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:11:21.620 EAL: No free 2048 kB hugepages reported on node 1 00:11:22.554 10:51:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot e53305c7-a5db-475c-81a8-df7964574111 MY_SNAPSHOT 00:11:22.813 10:51:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5705e4cb-7e35-4582-9bcb-98a1e3a777c1 00:11:22.813 10:51:38 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize e53305c7-a5db-475c-81a8-df7964574111 30 00:11:23.071 10:51:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 5705e4cb-7e35-4582-9bcb-98a1e3a777c1 MY_CLONE 00:11:23.328 10:51:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=31dec291-8421-48d9-a441-59242a663a27 00:11:23.328 10:51:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 31dec291-8421-48d9-a441-59242a663a27 00:11:23.892 10:51:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2758418 00:11:31.995 Initializing NVMe Controllers 00:11:31.995 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:31.995 Controller IO queue size 128, less than required. 00:11:31.995 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:31.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:11:31.995 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:11:31.995 Initialization complete. Launching workers. 
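The block just traced is the interesting part of the lvol test: snapshot, resize, clone, and inflate are all issued while spdk_nvme_perf drives queue-depth-128 random writes at the volume over TCP, and the 10-second summary that follows reflects that mixed load. Condensed (same $rpc shorthand; the variables stand in for the UUIDs captured in the trace):

./build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
perf_pid=$!
sleep 1                                               # let I/O ramp up first
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # origin becomes thin on top of the snapshot
$rpc bdev_lvol_resize "$lvol" 30                      # grow the live origin to 30 MiB
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)        # writable thin clone of the snapshot
$rpc bdev_lvol_inflate "$clone"                       # allocate every cluster; clone detaches from the snapshot
wait "$perf_pid"                                      # perf finishes its 10 s run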
00:11:31.995 ======================================================== 00:11:31.995 Latency(us) 00:11:31.995 Device Information : IOPS MiB/s Average min max 00:11:31.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11019.10 43.04 11621.66 1403.57 71207.65 00:11:31.995 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10936.60 42.72 11709.02 2000.38 62486.47 00:11:31.995 ======================================================== 00:11:31.995 Total : 21955.70 85.76 11665.17 1403.57 71207.65 00:11:31.995 00:11:31.995 10:51:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:32.288 10:51:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e53305c7-a5db-475c-81a8-df7964574111 00:11:32.556 10:51:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d5dbd92c-c263-41ab-9215-29393bfb8923 00:11:32.815 10:51:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:11:32.815 10:51:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:11:32.815 10:51:48 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:11:32.815 10:51:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:32.815 10:51:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:11:32.815 10:51:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:32.815 10:51:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:11:32.816 10:51:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:32.816 10:51:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:32.816 rmmod nvme_tcp 00:11:32.816 rmmod nvme_fabrics 00:11:32.816 rmmod nvme_keyring 00:11:32.816 10:51:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:32.816 10:51:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:11:32.816 10:51:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:11:32.816 10:51:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2757984 ']' 00:11:32.816 10:51:48 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2757984 00:11:32.816 10:51:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@946 -- # '[' -z 2757984 ']' 00:11:32.816 10:51:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@950 -- # kill -0 2757984 00:11:32.816 10:51:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # uname 00:11:32.816 10:51:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:32.816 10:51:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2757984 00:11:32.816 10:51:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:11:32.816 10:51:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:11:32.816 10:51:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2757984' 00:11:32.816 killing process with pid 2757984 00:11:32.816 10:51:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # kill 2757984 00:11:32.816 [2024-05-15 10:51:48.938156] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' 
scheduled for removal in v24.09 hit 1 times 00:11:32.816 10:51:48 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@970 -- # wait 2757984 00:11:33.075 10:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:33.075 10:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:33.075 10:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:33.075 10:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:33.075 10:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:33.075 10:51:49 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.075 10:51:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:33.075 10:51:49 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:35.646 00:11:35.646 real 0m19.860s 00:11:35.646 user 1m5.810s 00:11:35.646 sys 0m6.117s 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:11:35.646 ************************************ 00:11:35.646 END TEST nvmf_lvol 00:11:35.646 ************************************ 00:11:35.646 10:51:51 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:35.646 10:51:51 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:35.646 10:51:51 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:35.646 10:51:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:35.646 ************************************ 00:11:35.646 START TEST nvmf_lvs_grow 00:11:35.646 ************************************ 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:11:35.646 * Looking for test storage... 
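nvmf_lvs_grow.sh now repeats the same common.sh bring-up already seen before the lvol test: detect the two e810 ports (cvl_0_0/cvl_0_1), split them across a network namespace, and verify reachability. Boiled down, the nvmf_tcp_init sequence traced both times is:

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1    # start clean on both ports
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Only the nvmf_tgt process is launched inside the namespace (NVMF_APP gets prefixed with the NVMF_TARGET_NS_CMD netns exec); rpc.py still reaches it from the root namespace over the /var/tmp/spdk.sock Unix socket.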
00:11:35.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:11:35.646 10:51:51 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:38.178 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:38.178 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:38.178 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:38.179 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:38.179 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:38.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:38.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:11:38.179 00:11:38.179 --- 10.0.0.2 ping statistics --- 00:11:38.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.179 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:38.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:38.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:11:38.179 00:11:38.179 --- 10.0.0.1 ping statistics --- 00:11:38.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:38.179 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@720 -- # xtrace_disable 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2762092 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2762092 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # '[' -z 2762092 ']' 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:38.179 10:51:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:38.179 [2024-05-15 10:51:54.011865] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:11:38.179 [2024-05-15 10:51:54.011962] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.179 EAL: No free 2048 kB hugepages reported on node 1 00:11:38.179 [2024-05-15 10:51:54.088499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.179 [2024-05-15 10:51:54.205766] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.179 [2024-05-15 10:51:54.205830] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:38.179 [2024-05-15 10:51:54.205846] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.179 [2024-05-15 10:51:54.205859] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.179 [2024-05-15 10:51:54.205871] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:38.179 [2024-05-15 10:51:54.205902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.179 10:51:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:38.179 10:51:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # return 0 00:11:38.179 10:51:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:38.179 10:51:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:38.179 10:51:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:38.179 10:51:54 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.179 10:51:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:38.438 [2024-05-15 10:51:54.620620] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.438 10:51:54 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:11:38.438 10:51:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:11:38.438 10:51:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:38.438 10:51:54 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:38.696 ************************************ 00:11:38.696 START TEST lvs_grow_clean 00:11:38.696 ************************************ 00:11:38.696 10:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1121 -- # lvs_grow 00:11:38.696 10:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:38.696 10:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:38.696 10:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:38.696 10:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:38.696 10:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:38.696 10:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:38.696 10:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:38.697 10:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:38.697 10:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:38.955 10:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:11:38.955 10:51:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:39.213 10:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=37a13936-5ec7-4590-82c7-3d0650e44b1b 00:11:39.213 10:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37a13936-5ec7-4590-82c7-3d0650e44b1b 00:11:39.213 10:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:39.472 10:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:39.472 10:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:39.472 10:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 37a13936-5ec7-4590-82c7-3d0650e44b1b lvol 150 00:11:39.731 10:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=24070376-3739-488e-840e-fa7f1c794753 00:11:39.731 10:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:39.731 10:51:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:39.989 [2024-05-15 10:51:56.038327] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:39.989 [2024-05-15 10:51:56.038415] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:39.989 true 00:11:39.989 10:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37a13936-5ec7-4590-82c7-3d0650e44b1b 00:11:39.989 10:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:40.248 10:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:40.248 10:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:40.507 10:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 24070376-3739-488e-840e-fa7f1c794753 00:11:40.766 10:51:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:41.023 [2024-05-15 10:51:57.037112] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:41.023 [2024-05-15 
10:51:57.037382] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:41.023 10:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:41.280 10:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2762532 00:11:41.280 10:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:41.280 10:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:41.280 10:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2762532 /var/tmp/bdevperf.sock 00:11:41.280 10:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@827 -- # '[' -z 2762532 ']' 00:11:41.280 10:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:41.280 10:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:41.280 10:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:41.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:41.280 10:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:41.280 10:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:41.280 [2024-05-15 10:51:57.332583] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
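The initiator side of lvs_grow is a bdevperf instance rather than spdk_nvme_perf: started with -z it comes up idle on its own RPC socket, gets an NVMe-oF controller attached by RPC, and only starts I/O when perform_tests is invoked. Condensed from the trace that follows (workspace paths shortened; $rpc as before):

./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 \
    -w randwrite -t 10 -S 1 -z &                      # -z: wait for RPC; -S 1: per-second stats
bdevperf_pid=$!
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
run_test_pid=$!

Two seconds into the run the script grows the lvstore (the bdev_lvol_grow_lvstore call below), so the expansion is exercised under active random writes.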
00:11:41.280 [2024-05-15 10:51:57.332652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2762532 ] 00:11:41.280 EAL: No free 2048 kB hugepages reported on node 1 00:11:41.280 [2024-05-15 10:51:57.404995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.538 [2024-05-15 10:51:57.522484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.538 10:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:11:41.538 10:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # return 0 00:11:41.538 10:51:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:42.103 Nvme0n1 00:11:42.103 10:51:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:42.361 [ 00:11:42.361 { 00:11:42.361 "name": "Nvme0n1", 00:11:42.361 "aliases": [ 00:11:42.361 "24070376-3739-488e-840e-fa7f1c794753" 00:11:42.361 ], 00:11:42.361 "product_name": "NVMe disk", 00:11:42.361 "block_size": 4096, 00:11:42.361 "num_blocks": 38912, 00:11:42.361 "uuid": "24070376-3739-488e-840e-fa7f1c794753", 00:11:42.361 "assigned_rate_limits": { 00:11:42.361 "rw_ios_per_sec": 0, 00:11:42.361 "rw_mbytes_per_sec": 0, 00:11:42.361 "r_mbytes_per_sec": 0, 00:11:42.361 "w_mbytes_per_sec": 0 00:11:42.361 }, 00:11:42.361 "claimed": false, 00:11:42.361 "zoned": false, 00:11:42.361 "supported_io_types": { 00:11:42.361 "read": true, 00:11:42.361 "write": true, 00:11:42.361 "unmap": true, 00:11:42.361 "write_zeroes": true, 00:11:42.361 "flush": true, 00:11:42.361 "reset": true, 00:11:42.361 "compare": true, 00:11:42.361 "compare_and_write": true, 00:11:42.361 "abort": true, 00:11:42.361 "nvme_admin": true, 00:11:42.361 "nvme_io": true 00:11:42.361 }, 00:11:42.361 "memory_domains": [ 00:11:42.361 { 00:11:42.361 "dma_device_id": "system", 00:11:42.361 "dma_device_type": 1 00:11:42.361 } 00:11:42.361 ], 00:11:42.361 "driver_specific": { 00:11:42.361 "nvme": [ 00:11:42.361 { 00:11:42.361 "trid": { 00:11:42.361 "trtype": "TCP", 00:11:42.361 "adrfam": "IPv4", 00:11:42.361 "traddr": "10.0.0.2", 00:11:42.361 "trsvcid": "4420", 00:11:42.361 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:42.361 }, 00:11:42.361 "ctrlr_data": { 00:11:42.361 "cntlid": 1, 00:11:42.361 "vendor_id": "0x8086", 00:11:42.361 "model_number": "SPDK bdev Controller", 00:11:42.361 "serial_number": "SPDK0", 00:11:42.361 "firmware_revision": "24.05", 00:11:42.361 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:42.361 "oacs": { 00:11:42.361 "security": 0, 00:11:42.361 "format": 0, 00:11:42.361 "firmware": 0, 00:11:42.361 "ns_manage": 0 00:11:42.361 }, 00:11:42.361 "multi_ctrlr": true, 00:11:42.361 "ana_reporting": false 00:11:42.361 }, 00:11:42.361 "vs": { 00:11:42.361 "nvme_version": "1.3" 00:11:42.361 }, 00:11:42.361 "ns_data": { 00:11:42.361 "id": 1, 00:11:42.361 "can_share": true 00:11:42.361 } 00:11:42.361 } 00:11:42.361 ], 00:11:42.361 "mp_policy": "active_passive" 00:11:42.361 } 00:11:42.361 } 00:11:42.361 ] 00:11:42.361 10:51:58 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2762668 00:11:42.361 10:51:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:42.361 10:51:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:42.361 Running I/O for 10 seconds... 00:11:43.295 Latency(us) 00:11:43.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:43.295 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:43.295 Nvme0n1 : 1.00 13967.00 54.56 0.00 0.00 0.00 0.00 0.00 00:11:43.295 =================================================================================================================== 00:11:43.295 Total : 13967.00 54.56 0.00 0.00 0.00 0.00 0.00 00:11:43.295 00:11:44.228 10:52:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 37a13936-5ec7-4590-82c7-3d0650e44b1b 00:11:44.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:44.486 Nvme0n1 : 2.00 14087.50 55.03 0.00 0.00 0.00 0.00 0.00 00:11:44.486 =================================================================================================================== 00:11:44.486 Total : 14087.50 55.03 0.00 0.00 0.00 0.00 0.00 00:11:44.486 00:11:44.486 true 00:11:44.486 10:52:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37a13936-5ec7-4590-82c7-3d0650e44b1b 00:11:44.486 10:52:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:44.744 10:52:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:44.744 10:52:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:44.744 10:52:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2762668 00:11:45.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:45.310 Nvme0n1 : 3.00 14298.33 55.85 0.00 0.00 0.00 0.00 0.00 00:11:45.310 =================================================================================================================== 00:11:45.310 Total : 14298.33 55.85 0.00 0.00 0.00 0.00 0.00 00:11:45.310 00:11:46.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:46.682 Nvme0n1 : 4.00 14467.75 56.51 0.00 0.00 0.00 0.00 0.00 00:11:46.682 =================================================================================================================== 00:11:46.682 Total : 14467.75 56.51 0.00 0.00 0.00 0.00 0.00 00:11:46.682 00:11:47.614 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:47.614 Nvme0n1 : 5.00 14505.20 56.66 0.00 0.00 0.00 0.00 0.00 00:11:47.614 =================================================================================================================== 00:11:47.614 Total : 14505.20 56.66 0.00 0.00 0.00 0.00 0.00 00:11:47.614 00:11:48.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:48.546 Nvme0n1 : 6.00 14594.33 57.01 0.00 0.00 0.00 0.00 0.00 00:11:48.546 
=================================================================================================================== 00:11:48.546 Total : 14594.33 57.01 0.00 0.00 0.00 0.00 0.00 00:11:48.546 00:11:49.480 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:49.480 Nvme0n1 : 7.00 14676.29 57.33 0.00 0.00 0.00 0.00 0.00 00:11:49.480 =================================================================================================================== 00:11:49.480 Total : 14676.29 57.33 0.00 0.00 0.00 0.00 0.00 00:11:49.480 00:11:50.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:50.439 Nvme0n1 : 8.00 14697.75 57.41 0.00 0.00 0.00 0.00 0.00 00:11:50.440 =================================================================================================================== 00:11:50.440 Total : 14697.75 57.41 0.00 0.00 0.00 0.00 0.00 00:11:50.440 00:11:51.374 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:51.374 Nvme0n1 : 9.00 14742.89 57.59 0.00 0.00 0.00 0.00 0.00 00:11:51.374 =================================================================================================================== 00:11:51.374 Total : 14742.89 57.59 0.00 0.00 0.00 0.00 0.00 00:11:51.374 00:11:52.309 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:52.309 Nvme0n1 : 10.00 14740.60 57.58 0.00 0.00 0.00 0.00 0.00 00:11:52.309 =================================================================================================================== 00:11:52.309 Total : 14740.60 57.58 0.00 0.00 0.00 0.00 0.00 00:11:52.309 00:11:52.309 00:11:52.309 Latency(us) 00:11:52.309 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:52.309 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:52.309 Nvme0n1 : 10.01 14741.91 57.59 0.00 0.00 8676.44 5558.42 15631.55 00:11:52.309 =================================================================================================================== 00:11:52.309 Total : 14741.91 57.59 0.00 0.00 8676.44 5558.42 15631.55 00:11:52.309 0 00:11:52.309 10:52:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2762532 00:11:52.309 10:52:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@946 -- # '[' -z 2762532 ']' 00:11:52.309 10:52:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # kill -0 2762532 00:11:52.309 10:52:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # uname 00:11:52.309 10:52:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:11:52.309 10:52:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2762532 00:11:52.569 10:52:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:11:52.569 10:52:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:11:52.569 10:52:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2762532' 00:11:52.569 killing process with pid 2762532 00:11:52.569 10:52:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # kill 2762532 00:11:52.569 Received shutdown signal, test time was about 10.000000 seconds 00:11:52.569 00:11:52.569 Latency(us) 00:11:52.569 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:11:52.569 =================================================================================================================== 00:11:52.569 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:52.569 10:52:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # wait 2762532 00:11:52.827 10:52:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:53.085 10:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:53.343 10:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37a13936-5ec7-4590-82c7-3d0650e44b1b 00:11:53.343 10:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:53.601 10:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:53.601 10:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:53.601 10:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:53.860 [2024-05-15 10:52:09.871749] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:53.860 10:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37a13936-5ec7-4590-82c7-3d0650e44b1b 00:11:53.860 10:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:11:53.860 10:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37a13936-5ec7-4590-82c7-3d0650e44b1b 00:11:53.860 10:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:53.860 10:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:53.860 10:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:53.860 10:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:53.861 10:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:53.861 10:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:53.861 10:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:53.861 10:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:53.861 10:52:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37a13936-5ec7-4590-82c7-3d0650e44b1b 00:11:54.119 request: 00:11:54.119 { 00:11:54.119 "uuid": "37a13936-5ec7-4590-82c7-3d0650e44b1b", 00:11:54.119 "method": "bdev_lvol_get_lvstores", 00:11:54.119 "req_id": 1 00:11:54.119 } 00:11:54.119 Got JSON-RPC error response 00:11:54.119 response: 00:11:54.119 { 00:11:54.119 "code": -19, 00:11:54.119 "message": "No such device" 00:11:54.119 } 00:11:54.119 10:52:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:11:54.119 10:52:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:54.119 10:52:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:54.119 10:52:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:54.119 10:52:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:54.423 aio_bdev 00:11:54.423 10:52:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 24070376-3739-488e-840e-fa7f1c794753 00:11:54.423 10:52:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@895 -- # local bdev_name=24070376-3739-488e-840e-fa7f1c794753 00:11:54.423 10:52:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:11:54.423 10:52:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local i 00:11:54.423 10:52:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # [[ -z '' ]] 00:11:54.423 10:52:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:11:54.423 10:52:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:54.684 10:52:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 24070376-3739-488e-840e-fa7f1c794753 -t 2000 00:11:54.943 [ 00:11:54.943 { 00:11:54.943 "name": "24070376-3739-488e-840e-fa7f1c794753", 00:11:54.943 "aliases": [ 00:11:54.943 "lvs/lvol" 00:11:54.943 ], 00:11:54.943 "product_name": "Logical Volume", 00:11:54.943 "block_size": 4096, 00:11:54.943 "num_blocks": 38912, 00:11:54.943 "uuid": "24070376-3739-488e-840e-fa7f1c794753", 00:11:54.943 "assigned_rate_limits": { 00:11:54.943 "rw_ios_per_sec": 0, 00:11:54.943 "rw_mbytes_per_sec": 0, 00:11:54.943 "r_mbytes_per_sec": 0, 00:11:54.943 "w_mbytes_per_sec": 0 00:11:54.943 }, 00:11:54.943 "claimed": false, 00:11:54.943 "zoned": false, 00:11:54.943 "supported_io_types": { 00:11:54.943 "read": true, 00:11:54.943 "write": true, 00:11:54.943 "unmap": true, 00:11:54.943 "write_zeroes": true, 00:11:54.943 "flush": false, 00:11:54.943 "reset": true, 00:11:54.943 "compare": false, 00:11:54.943 "compare_and_write": false, 00:11:54.943 "abort": false, 00:11:54.943 "nvme_admin": false, 00:11:54.943 "nvme_io": false 00:11:54.943 }, 00:11:54.943 "driver_specific": { 00:11:54.943 "lvol": { 00:11:54.943 "lvol_store_uuid": "37a13936-5ec7-4590-82c7-3d0650e44b1b", 00:11:54.943 "base_bdev": "aio_bdev", 
00:11:54.943 "thin_provision": false, 00:11:54.943 "num_allocated_clusters": 38, 00:11:54.943 "snapshot": false, 00:11:54.943 "clone": false, 00:11:54.943 "esnap_clone": false 00:11:54.943 } 00:11:54.943 } 00:11:54.943 } 00:11:54.943 ] 00:11:54.943 10:52:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # return 0 00:11:54.943 10:52:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37a13936-5ec7-4590-82c7-3d0650e44b1b 00:11:54.943 10:52:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:55.201 10:52:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:55.201 10:52:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 37a13936-5ec7-4590-82c7-3d0650e44b1b 00:11:55.201 10:52:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:55.459 10:52:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:55.459 10:52:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 24070376-3739-488e-840e-fa7f1c794753 00:11:55.717 10:52:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 37a13936-5ec7-4590-82c7-3d0650e44b1b 00:11:55.976 10:52:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:56.234 10:52:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:56.234 00:11:56.234 real 0m17.619s 00:11:56.234 user 0m17.042s 00:11:56.234 sys 0m1.927s 00:11:56.234 10:52:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:11:56.234 10:52:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:56.234 ************************************ 00:11:56.234 END TEST lvs_grow_clean 00:11:56.234 ************************************ 00:11:56.234 10:52:12 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:56.234 10:52:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:11:56.234 10:52:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # xtrace_disable 00:11:56.234 10:52:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:56.234 ************************************ 00:11:56.234 START TEST lvs_grow_dirty 00:11:56.234 ************************************ 00:11:56.234 10:52:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1121 -- # lvs_grow dirty 00:11:56.234 10:52:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:56.234 10:52:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:56.234 10:52:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid 
run_test_pid 00:11:56.234 10:52:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:56.234 10:52:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:56.234 10:52:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:56.234 10:52:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:56.234 10:52:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:56.234 10:52:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:56.493 10:52:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:56.493 10:52:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:56.751 10:52:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6813ea60-6801-43e3-8737-79bc98e585c6 00:11:56.751 10:52:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6813ea60-6801-43e3-8737-79bc98e585c6 00:11:56.751 10:52:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:57.008 10:52:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:57.008 10:52:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:57.008 10:52:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6813ea60-6801-43e3-8737-79bc98e585c6 lvol 150 00:11:57.265 10:52:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=db6b1475-7008-4511-b2de-3f9cf384e085 00:11:57.265 10:52:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:57.265 10:52:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:57.521 [2024-05-15 10:52:13.658206] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:57.521 [2024-05-15 10:52:13.658319] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:57.521 true 00:11:57.522 10:52:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6813ea60-6801-43e3-8737-79bc98e585c6 00:11:57.522 10:52:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r 
'.[0].total_data_clusters' 00:11:57.779 10:52:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:57.779 10:52:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:58.038 10:52:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 db6b1475-7008-4511-b2de-3f9cf384e085 00:11:58.296 10:52:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:58.553 [2024-05-15 10:52:14.733519] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.553 10:52:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:58.810 10:52:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2764597 00:11:58.810 10:52:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:58.810 10:52:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2764597 /var/tmp/bdevperf.sock 00:11:58.810 10:52:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 2764597 ']' 00:11:58.810 10:52:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:58.810 10:52:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:58.810 10:52:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:11:58.810 10:52:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:58.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:58.810 10:52:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:11:58.810 10:52:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:58.810 [2024-05-15 10:52:15.041043] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
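The xtrace above reduces to a short rpc.py sequence before bdevperf is launched: create a 200 MiB file-backed AIO bdev, build an lvstore on it, carve out a 150 MiB lvol, then grow the backing file to 400 MiB and rescan. A minimal standalone sketch using the same calls as the trace (the /tmp path is illustrative; the names, sizes, and 4 MiB cluster size are taken from this run):

    # 200 MiB backing file exposed as an AIO bdev with 4 KiB blocks
    truncate -s 200M /tmp/aio_bdev
    scripts/rpc.py bdev_aio_create /tmp/aio_bdev aio_bdev 4096
    # lvstore with 4 MiB clusters, then a 150 MiB lvol inside it
    lvs=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 aio_bdev lvs)
    lvol=$(scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150)
    # grow the file; bdev_aio_rescan makes the bdev pick up the new size
    truncate -s 400M /tmp/aio_bdev
    scripts/rpc.py bdev_aio_rescan aio_bdev

Exporting the lvol over NVMe/TCP is then the three calls already traced (nvmf_create_subsystem, nvmf_subsystem_add_ns with the lvol UUID, nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 4420); bdev_lvol_grow_lvstore itself is only issued once bdevperf is driving I/O.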
00:11:58.810 [2024-05-15 10:52:15.041135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2764597 ] 00:11:59.068 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.069 [2024-05-15 10:52:15.109675] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:59.069 [2024-05-15 10:52:15.218509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.003 10:52:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:00.003 10:52:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:12:00.003 10:52:15 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:00.261 Nvme0n1 00:12:00.261 10:52:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:00.519 [ 00:12:00.519 { 00:12:00.519 "name": "Nvme0n1", 00:12:00.519 "aliases": [ 00:12:00.519 "db6b1475-7008-4511-b2de-3f9cf384e085" 00:12:00.519 ], 00:12:00.519 "product_name": "NVMe disk", 00:12:00.519 "block_size": 4096, 00:12:00.519 "num_blocks": 38912, 00:12:00.519 "uuid": "db6b1475-7008-4511-b2de-3f9cf384e085", 00:12:00.519 "assigned_rate_limits": { 00:12:00.519 "rw_ios_per_sec": 0, 00:12:00.519 "rw_mbytes_per_sec": 0, 00:12:00.519 "r_mbytes_per_sec": 0, 00:12:00.519 "w_mbytes_per_sec": 0 00:12:00.519 }, 00:12:00.519 "claimed": false, 00:12:00.519 "zoned": false, 00:12:00.519 "supported_io_types": { 00:12:00.519 "read": true, 00:12:00.519 "write": true, 00:12:00.519 "unmap": true, 00:12:00.519 "write_zeroes": true, 00:12:00.519 "flush": true, 00:12:00.519 "reset": true, 00:12:00.519 "compare": true, 00:12:00.519 "compare_and_write": true, 00:12:00.519 "abort": true, 00:12:00.519 "nvme_admin": true, 00:12:00.519 "nvme_io": true 00:12:00.519 }, 00:12:00.519 "memory_domains": [ 00:12:00.519 { 00:12:00.519 "dma_device_id": "system", 00:12:00.519 "dma_device_type": 1 00:12:00.519 } 00:12:00.519 ], 00:12:00.519 "driver_specific": { 00:12:00.519 "nvme": [ 00:12:00.519 { 00:12:00.519 "trid": { 00:12:00.519 "trtype": "TCP", 00:12:00.519 "adrfam": "IPv4", 00:12:00.519 "traddr": "10.0.0.2", 00:12:00.519 "trsvcid": "4420", 00:12:00.519 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:00.519 }, 00:12:00.519 "ctrlr_data": { 00:12:00.519 "cntlid": 1, 00:12:00.519 "vendor_id": "0x8086", 00:12:00.519 "model_number": "SPDK bdev Controller", 00:12:00.519 "serial_number": "SPDK0", 00:12:00.519 "firmware_revision": "24.05", 00:12:00.519 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:00.519 "oacs": { 00:12:00.519 "security": 0, 00:12:00.519 "format": 0, 00:12:00.519 "firmware": 0, 00:12:00.519 "ns_manage": 0 00:12:00.519 }, 00:12:00.519 "multi_ctrlr": true, 00:12:00.519 "ana_reporting": false 00:12:00.519 }, 00:12:00.519 "vs": { 00:12:00.519 "nvme_version": "1.3" 00:12:00.519 }, 00:12:00.519 "ns_data": { 00:12:00.519 "id": 1, 00:12:00.519 "can_share": true 00:12:00.519 } 00:12:00.519 } 00:12:00.519 ], 00:12:00.519 "mp_policy": "active_passive" 00:12:00.519 } 00:12:00.519 } 00:12:00.519 ] 00:12:00.519 10:52:16 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2764856 00:12:00.519 10:52:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:00.520 10:52:16 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:00.777 Running I/O for 10 seconds... 00:12:01.711 Latency(us) 00:12:01.711 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:01.711 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:01.711 Nvme0n1 : 1.00 14108.00 55.11 0.00 0.00 0.00 0.00 0.00 00:12:01.711 =================================================================================================================== 00:12:01.711 Total : 14108.00 55.11 0.00 0.00 0.00 0.00 0.00 00:12:01.711 00:12:02.644 10:52:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6813ea60-6801-43e3-8737-79bc98e585c6 00:12:02.644 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:02.644 Nvme0n1 : 2.00 14414.00 56.30 0.00 0.00 0.00 0.00 0.00 00:12:02.644 =================================================================================================================== 00:12:02.644 Total : 14414.00 56.30 0.00 0.00 0.00 0.00 0.00 00:12:02.644 00:12:02.902 true 00:12:02.902 10:52:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6813ea60-6801-43e3-8737-79bc98e585c6 00:12:02.902 10:52:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:12:03.160 10:52:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:03.160 10:52:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:03.160 10:52:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2764856 00:12:03.726 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:03.726 Nvme0n1 : 3.00 14409.33 56.29 0.00 0.00 0.00 0.00 0.00 00:12:03.726 =================================================================================================================== 00:12:03.726 Total : 14409.33 56.29 0.00 0.00 0.00 0.00 0.00 00:12:03.726 00:12:04.659 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:04.659 Nvme0n1 : 4.00 14451.75 56.45 0.00 0.00 0.00 0.00 0.00 00:12:04.659 =================================================================================================================== 00:12:04.659 Total : 14451.75 56.45 0.00 0.00 0.00 0.00 0.00 00:12:04.659 00:12:05.592 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:05.592 Nvme0n1 : 5.00 14533.60 56.77 0.00 0.00 0.00 0.00 0.00 00:12:05.592 =================================================================================================================== 00:12:05.592 Total : 14533.60 56.77 0.00 0.00 0.00 0.00 0.00 00:12:05.592 00:12:06.991 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:06.991 Nvme0n1 : 6.00 14594.50 57.01 0.00 0.00 0.00 0.00 0.00 00:12:06.991 
=================================================================================================================== 00:12:06.991 Total : 14594.50 57.01 0.00 0.00 0.00 0.00 0.00 00:12:06.991 00:12:07.939 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:07.939 Nvme0n1 : 7.00 14614.29 57.09 0.00 0.00 0.00 0.00 0.00 00:12:07.939 =================================================================================================================== 00:12:07.939 Total : 14614.29 57.09 0.00 0.00 0.00 0.00 0.00 00:12:07.939 00:12:08.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:08.874 Nvme0n1 : 8.00 14649.75 57.23 0.00 0.00 0.00 0.00 0.00 00:12:08.874 =================================================================================================================== 00:12:08.874 Total : 14649.75 57.23 0.00 0.00 0.00 0.00 0.00 00:12:08.874 00:12:09.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:09.809 Nvme0n1 : 9.00 14707.33 57.45 0.00 0.00 0.00 0.00 0.00 00:12:09.809 =================================================================================================================== 00:12:09.809 Total : 14707.33 57.45 0.00 0.00 0.00 0.00 0.00 00:12:09.809 00:12:10.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:10.745 Nvme0n1 : 10.00 14721.40 57.51 0.00 0.00 0.00 0.00 0.00 00:12:10.745 =================================================================================================================== 00:12:10.745 Total : 14721.40 57.51 0.00 0.00 0.00 0.00 0.00 00:12:10.745 00:12:10.745 00:12:10.745 Latency(us) 00:12:10.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:10.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:10.745 Nvme0n1 : 10.01 14725.10 57.52 0.00 0.00 8686.28 2585.03 13398.47 00:12:10.745 =================================================================================================================== 00:12:10.745 Total : 14725.10 57.52 0.00 0.00 8686.28 2585.03 13398.47 00:12:10.745 0 00:12:10.745 10:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2764597 00:12:10.745 10:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@946 -- # '[' -z 2764597 ']' 00:12:10.745 10:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # kill -0 2764597 00:12:10.745 10:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # uname 00:12:10.746 10:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:10.746 10:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2764597 00:12:10.746 10:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:12:10.746 10:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:12:10.746 10:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2764597' 00:12:10.746 killing process with pid 2764597 00:12:10.746 10:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # kill 2764597 00:12:10.746 Received shutdown signal, test time was about 10.000000 seconds 00:12:10.746 00:12:10.746 Latency(us) 00:12:10.746 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:12:10.746 =================================================================================================================== 00:12:10.746 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:10.746 10:52:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # wait 2764597 00:12:11.004 10:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:11.262 10:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:11.520 10:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6813ea60-6801-43e3-8737-79bc98e585c6 00:12:11.520 10:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:11.779 10:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:11.779 10:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:11.779 10:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2762092 00:12:11.779 10:52:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2762092 00:12:12.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2762092 Killed "${NVMF_APP[@]}" "$@" 00:12:12.038 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:12.038 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:12.038 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:12.038 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:12.038 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:12.038 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2766184 00:12:12.038 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:12.038 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2766184 00:12:12.038 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@827 -- # '[' -z 2766184 ']' 00:12:12.038 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.038 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:12.038 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
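At this point the dirty variant has SIGKILLed the original target after growing the lvstore, so the grown metadata on the AIO file never went through a clean shutdown; the freshly started target has to recover the blobstore when the AIO bdev is re-created (the "Performing recovery on blobstore" notice that follows in the trace). The check the test performs after recovery, condensed into a sketch (the UUID and expected counts come from this run; the file path stands in for the workspace path in the trace):

    # Re-attach the backing file; loading the dirty lvstore triggers recovery
    scripts/rpc.py bdev_aio_create /path/to/aio_bdev aio_bdev 4096
    lvs=6813ea60-6801-43e3-8737-79bc98e585c6
    # The grown geometry must survive: 99 data clusters in total,
    # 38 of them pinned by the 150 MiB lvol, leaving 61 free.
    scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'        # 61
    scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'  # 99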
00:12:12.038 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:12.038 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:12.038 [2024-05-15 10:52:28.069020] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:12:12.038 [2024-05-15 10:52:28.069085] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:12.038 EAL: No free 2048 kB hugepages reported on node 1 00:12:12.038 [2024-05-15 10:52:28.145120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.038 [2024-05-15 10:52:28.260540] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:12.038 [2024-05-15 10:52:28.260587] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:12.038 [2024-05-15 10:52:28.260604] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:12.038 [2024-05-15 10:52:28.260618] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:12.038 [2024-05-15 10:52:28.260630] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:12.038 [2024-05-15 10:52:28.260667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.296 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:12.296 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # return 0 00:12:12.296 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:12.296 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:12.296 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:12.296 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:12.296 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:12.554 [2024-05-15 10:52:28.615017] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:12.554 [2024-05-15 10:52:28.615164] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:12.555 [2024-05-15 10:52:28.615231] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:12.555 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:12.555 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev db6b1475-7008-4511-b2de-3f9cf384e085 00:12:12.555 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=db6b1475-7008-4511-b2de-3f9cf384e085 00:12:12.555 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:12.555 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:12:12.555 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@898 -- # [[ -z '' ]] 00:12:12.555 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:12.555 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:12.813 10:52:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b db6b1475-7008-4511-b2de-3f9cf384e085 -t 2000 00:12:13.071 [ 00:12:13.071 { 00:12:13.071 "name": "db6b1475-7008-4511-b2de-3f9cf384e085", 00:12:13.071 "aliases": [ 00:12:13.071 "lvs/lvol" 00:12:13.071 ], 00:12:13.071 "product_name": "Logical Volume", 00:12:13.071 "block_size": 4096, 00:12:13.071 "num_blocks": 38912, 00:12:13.071 "uuid": "db6b1475-7008-4511-b2de-3f9cf384e085", 00:12:13.071 "assigned_rate_limits": { 00:12:13.071 "rw_ios_per_sec": 0, 00:12:13.071 "rw_mbytes_per_sec": 0, 00:12:13.071 "r_mbytes_per_sec": 0, 00:12:13.071 "w_mbytes_per_sec": 0 00:12:13.071 }, 00:12:13.071 "claimed": false, 00:12:13.071 "zoned": false, 00:12:13.071 "supported_io_types": { 00:12:13.071 "read": true, 00:12:13.071 "write": true, 00:12:13.071 "unmap": true, 00:12:13.072 "write_zeroes": true, 00:12:13.072 "flush": false, 00:12:13.072 "reset": true, 00:12:13.072 "compare": false, 00:12:13.072 "compare_and_write": false, 00:12:13.072 "abort": false, 00:12:13.072 "nvme_admin": false, 00:12:13.072 "nvme_io": false 00:12:13.072 }, 00:12:13.072 "driver_specific": { 00:12:13.072 "lvol": { 00:12:13.072 "lvol_store_uuid": "6813ea60-6801-43e3-8737-79bc98e585c6", 00:12:13.072 "base_bdev": "aio_bdev", 00:12:13.072 "thin_provision": false, 00:12:13.072 "num_allocated_clusters": 38, 00:12:13.072 "snapshot": false, 00:12:13.072 "clone": false, 00:12:13.072 "esnap_clone": false 00:12:13.072 } 00:12:13.072 } 00:12:13.072 } 00:12:13.072 ] 00:12:13.072 10:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:12:13.072 10:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6813ea60-6801-43e3-8737-79bc98e585c6 00:12:13.072 10:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:13.330 10:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:13.330 10:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6813ea60-6801-43e3-8737-79bc98e585c6 00:12:13.330 10:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:13.588 10:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:13.588 10:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:13.847 [2024-05-15 10:52:29.924134] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:13.847 10:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
6813ea60-6801-43e3-8737-79bc98e585c6 00:12:13.847 10:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:12:13.847 10:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6813ea60-6801-43e3-8737-79bc98e585c6 00:12:13.847 10:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:13.847 10:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:13.847 10:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:13.847 10:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:13.847 10:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:13.847 10:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:13.847 10:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:13.847 10:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:13.847 10:52:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6813ea60-6801-43e3-8737-79bc98e585c6 00:12:14.104 request: 00:12:14.104 { 00:12:14.104 "uuid": "6813ea60-6801-43e3-8737-79bc98e585c6", 00:12:14.104 "method": "bdev_lvol_get_lvstores", 00:12:14.104 "req_id": 1 00:12:14.104 } 00:12:14.104 Got JSON-RPC error response 00:12:14.104 response: 00:12:14.104 { 00:12:14.104 "code": -19, 00:12:14.104 "message": "No such device" 00:12:14.104 } 00:12:14.104 10:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:12:14.104 10:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:14.104 10:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:14.104 10:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:14.104 10:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:14.362 aio_bdev 00:12:14.362 10:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev db6b1475-7008-4511-b2de-3f9cf384e085 00:12:14.362 10:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@895 -- # local bdev_name=db6b1475-7008-4511-b2de-3f9cf384e085 00:12:14.362 10:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_timeout= 00:12:14.362 10:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local i 00:12:14.362 10:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # [[ -z '' ]] 
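The request/response pair above is the expected failure, not a test problem: with aio_bdev deleted, bdev_lvol_get_lvstores must come back with -19 (No such device), and the harness's NOT/valid_exec_arg wrapper converts that non-zero exit into a pass. Outside the harness the same assertion could be written directly; a sketch, not the harness code:

    lvs=6813ea60-6801-43e3-8737-79bc98e585c6
    # Once the base bdev is gone the lvstore must be unreachable;
    # a successful lookup here would mean stale lvstore state.
    if scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" >/dev/null 2>&1; then
        echo "FAIL: lvstore $lvs still reachable after bdev_aio_delete" >&2
        exit 1
    fi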
00:12:14.362 10:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # bdev_timeout=2000 00:12:14.362 10:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:14.927 10:52:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b db6b1475-7008-4511-b2de-3f9cf384e085 -t 2000 00:12:14.927 [ 00:12:14.927 { 00:12:14.927 "name": "db6b1475-7008-4511-b2de-3f9cf384e085", 00:12:14.927 "aliases": [ 00:12:14.927 "lvs/lvol" 00:12:14.927 ], 00:12:14.927 "product_name": "Logical Volume", 00:12:14.927 "block_size": 4096, 00:12:14.927 "num_blocks": 38912, 00:12:14.927 "uuid": "db6b1475-7008-4511-b2de-3f9cf384e085", 00:12:14.927 "assigned_rate_limits": { 00:12:14.927 "rw_ios_per_sec": 0, 00:12:14.927 "rw_mbytes_per_sec": 0, 00:12:14.927 "r_mbytes_per_sec": 0, 00:12:14.927 "w_mbytes_per_sec": 0 00:12:14.927 }, 00:12:14.927 "claimed": false, 00:12:14.927 "zoned": false, 00:12:14.927 "supported_io_types": { 00:12:14.927 "read": true, 00:12:14.927 "write": true, 00:12:14.927 "unmap": true, 00:12:14.927 "write_zeroes": true, 00:12:14.927 "flush": false, 00:12:14.927 "reset": true, 00:12:14.927 "compare": false, 00:12:14.927 "compare_and_write": false, 00:12:14.927 "abort": false, 00:12:14.927 "nvme_admin": false, 00:12:14.927 "nvme_io": false 00:12:14.927 }, 00:12:14.927 "driver_specific": { 00:12:14.927 "lvol": { 00:12:14.927 "lvol_store_uuid": "6813ea60-6801-43e3-8737-79bc98e585c6", 00:12:14.927 "base_bdev": "aio_bdev", 00:12:14.927 "thin_provision": false, 00:12:14.927 "num_allocated_clusters": 38, 00:12:14.927 "snapshot": false, 00:12:14.927 "clone": false, 00:12:14.927 "esnap_clone": false 00:12:14.927 } 00:12:14.927 } 00:12:14.927 } 00:12:14.927 ] 00:12:14.927 10:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # return 0 00:12:14.927 10:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6813ea60-6801-43e3-8737-79bc98e585c6 00:12:14.927 10:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:15.186 10:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:15.186 10:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6813ea60-6801-43e3-8737-79bc98e585c6 00:12:15.186 10:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:15.445 10:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:15.445 10:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete db6b1475-7008-4511-b2de-3f9cf384e085 00:12:15.702 10:52:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6813ea60-6801-43e3-8737-79bc98e585c6 00:12:15.960 10:52:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:16.218 10:52:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:16.218 00:12:16.218 real 0m20.076s 00:12:16.218 user 0m50.377s 00:12:16.218 sys 0m4.884s 00:12:16.218 10:52:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:16.218 10:52:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:16.218 ************************************ 00:12:16.218 END TEST lvs_grow_dirty 00:12:16.218 ************************************ 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@804 -- # type=--id 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # id=0 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # for n in $shm_files 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:16.477 nvmf_trace.0 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # return 0 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:16.477 rmmod nvme_tcp 00:12:16.477 rmmod nvme_fabrics 00:12:16.477 rmmod nvme_keyring 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2766184 ']' 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2766184 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@946 -- # '[' -z 2766184 ']' 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # kill -0 2766184 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # uname 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2766184 00:12:16.477 10:52:32 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2766184' 00:12:16.477 killing process with pid 2766184 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # kill 2766184 00:12:16.477 10:52:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # wait 2766184 00:12:16.735 10:52:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:16.735 10:52:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:16.735 10:52:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:16.735 10:52:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:16.735 10:52:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:16.735 10:52:32 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.735 10:52:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.735 10:52:32 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.267 10:52:34 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:19.267 00:12:19.267 real 0m43.554s 00:12:19.267 user 1m13.537s 00:12:19.267 sys 0m8.963s 00:12:19.267 10:52:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:19.267 10:52:34 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:19.267 ************************************ 00:12:19.267 END TEST nvmf_lvs_grow 00:12:19.267 ************************************ 00:12:19.267 10:52:34 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:19.267 10:52:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:19.267 10:52:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:19.267 10:52:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:19.267 ************************************ 00:12:19.267 START TEST nvmf_bdev_io_wait 00:12:19.267 ************************************ 00:12:19.267 10:52:34 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:12:19.267 * Looking for test storage... 
00:12:19.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:12:19.267 10:52:35 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:21.798 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:21.798 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:21.798 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:21.798 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.798 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:21.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:21.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:12:21.799 00:12:21.799 --- 10.0.0.2 ping statistics --- 00:12:21.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.799 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:21.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:21.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:12:21.799 00:12:21.799 --- 10.0.0.1 ping statistics --- 00:12:21.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.799 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2769002 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2769002 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@827 -- # '[' -z 2769002 ']' 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:21.799 10:52:37 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:21.799 [2024-05-15 10:52:37.725693] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
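For reference, the nvmf_tcp_init sequence traced above reduces to the following standalone sketch. The interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the ones used in this run; moving one port of the dual-port NIC into its own namespace is what forces initiator and target traffic onto the physical wire even though both ports sit in the same host.

# target-side port gets its own network namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# initiator side (root namespace) and target side (namespace) addressing
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# let NVMe/TCP traffic (port 4420) through the host firewall
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# verify reachability in both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1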
00:12:21.799 [2024-05-15 10:52:37.725795] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.799 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.799 [2024-05-15 10:52:37.808714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.799 [2024-05-15 10:52:37.931947] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.799 [2024-05-15 10:52:37.932023] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.799 [2024-05-15 10:52:37.932040] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.799 [2024-05-15 10:52:37.932053] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.799 [2024-05-15 10:52:37.932066] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:21.799 [2024-05-15 10:52:37.932131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.799 [2024-05-15 10:52:37.932160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.799 [2024-05-15 10:52:37.932213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.799 [2024-05-15 10:52:37.932216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # return 0 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:22.736 [2024-05-15 10:52:38.791594] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.736 10:52:38 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:22.736 Malloc0 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:22.736 [2024-05-15 10:52:38.849962] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:22.736 [2024-05-15 10:52:38.850303] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2769158 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2769160 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:22.736 { 00:12:22.736 "params": { 00:12:22.736 "name": "Nvme$subsystem", 00:12:22.736 "trtype": "$TEST_TRANSPORT", 00:12:22.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:22.736 "adrfam": "ipv4", 00:12:22.736 "trsvcid": "$NVMF_PORT", 00:12:22.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:22.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:22.736 "hdgst": ${hdgst:-false}, 00:12:22.736 "ddgst": ${ddgst:-false} 00:12:22.736 }, 00:12:22.736 "method": 
"bdev_nvme_attach_controller" 00:12:22.736 } 00:12:22.736 EOF 00:12:22.736 )") 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2769162 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:22.736 { 00:12:22.736 "params": { 00:12:22.736 "name": "Nvme$subsystem", 00:12:22.736 "trtype": "$TEST_TRANSPORT", 00:12:22.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:22.736 "adrfam": "ipv4", 00:12:22.736 "trsvcid": "$NVMF_PORT", 00:12:22.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:22.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:22.736 "hdgst": ${hdgst:-false}, 00:12:22.736 "ddgst": ${ddgst:-false} 00:12:22.736 }, 00:12:22.736 "method": "bdev_nvme_attach_controller" 00:12:22.736 } 00:12:22.736 EOF 00:12:22.736 )") 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2769165 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:22.736 { 00:12:22.736 "params": { 00:12:22.736 "name": "Nvme$subsystem", 00:12:22.736 "trtype": "$TEST_TRANSPORT", 00:12:22.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:22.736 "adrfam": "ipv4", 00:12:22.736 "trsvcid": "$NVMF_PORT", 00:12:22.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:22.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:22.736 "hdgst": ${hdgst:-false}, 00:12:22.736 "ddgst": ${ddgst:-false} 00:12:22.736 }, 00:12:22.736 "method": "bdev_nvme_attach_controller" 00:12:22.736 } 00:12:22.736 EOF 00:12:22.736 )") 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@532 -- # local subsystem config 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:12:22.736 { 00:12:22.736 "params": { 00:12:22.736 "name": "Nvme$subsystem", 00:12:22.736 "trtype": "$TEST_TRANSPORT", 00:12:22.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:22.736 "adrfam": "ipv4", 00:12:22.736 "trsvcid": "$NVMF_PORT", 00:12:22.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:22.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:22.736 "hdgst": ${hdgst:-false}, 00:12:22.736 "ddgst": ${ddgst:-false} 00:12:22.736 }, 00:12:22.736 "method": "bdev_nvme_attach_controller" 00:12:22.736 } 00:12:22.736 EOF 00:12:22.736 )") 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2769158 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:22.736 "params": { 00:12:22.736 "name": "Nvme1", 00:12:22.736 "trtype": "tcp", 00:12:22.736 "traddr": "10.0.0.2", 00:12:22.736 "adrfam": "ipv4", 00:12:22.736 "trsvcid": "4420", 00:12:22.736 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:22.736 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:22.736 "hdgst": false, 00:12:22.736 "ddgst": false 00:12:22.736 }, 00:12:22.736 "method": "bdev_nvme_attach_controller" 00:12:22.736 }' 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:22.736 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:22.736 "params": { 00:12:22.736 "name": "Nvme1", 00:12:22.736 "trtype": "tcp", 00:12:22.737 "traddr": "10.0.0.2", 00:12:22.737 "adrfam": "ipv4", 00:12:22.737 "trsvcid": "4420", 00:12:22.737 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:22.737 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:22.737 "hdgst": false, 00:12:22.737 "ddgst": false 00:12:22.737 }, 00:12:22.737 "method": "bdev_nvme_attach_controller" 00:12:22.737 }' 00:12:22.737 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
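The IFS=, / printf / jq steps above are gen_nvmf_target_json assembling the bdevperf configuration. The merged document follows SPDK's standard JSON config schema; the wrapper shape below is inferred from that schema, while the attach parameters are copied from the printf output of this run:

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}

bdevperf sees this as --json /dev/fd/63 because the harness hands it over through process substitution, e.g. bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256.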
00:12:22.737 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:22.737 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:22.737 "params": { 00:12:22.737 "name": "Nvme1", 00:12:22.737 "trtype": "tcp", 00:12:22.737 "traddr": "10.0.0.2", 00:12:22.737 "adrfam": "ipv4", 00:12:22.737 "trsvcid": "4420", 00:12:22.737 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:22.737 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:22.737 "hdgst": false, 00:12:22.737 "ddgst": false 00:12:22.737 }, 00:12:22.737 "method": "bdev_nvme_attach_controller" 00:12:22.737 }' 00:12:22.737 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:12:22.737 10:52:38 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:12:22.737 "params": { 00:12:22.737 "name": "Nvme1", 00:12:22.737 "trtype": "tcp", 00:12:22.737 "traddr": "10.0.0.2", 00:12:22.737 "adrfam": "ipv4", 00:12:22.737 "trsvcid": "4420", 00:12:22.737 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:22.737 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:22.737 "hdgst": false, 00:12:22.737 "ddgst": false 00:12:22.737 }, 00:12:22.737 "method": "bdev_nvme_attach_controller" 00:12:22.737 }' 00:12:22.737 [2024-05-15 10:52:38.895031] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:12:22.737 [2024-05-15 10:52:38.895031] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:12:22.737 [2024-05-15 10:52:38.895031] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:12:22.737 [2024-05-15 10:52:38.895122] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:12:22.737 [2024-05-15 10:52:38.895122] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:12:22.737 [2024-05-15 10:52:38.895123] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:12:22.737 [2024-05-15 10:52:38.896746] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
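Condensed, the orchestration traced here is one bdevperf process per I/O type, each on its own core mask and SPDK instance ID, all hammering nqn.2016-06.io.spdk:cnode1 at once; the script then waits on each PID. A sketch with the binary path shortened:

BP=./build/examples/bdevperf
$BP -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
$BP -m 0x20 -i 2 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
$BP -m 0x40 -i 3 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
$BP -m 0x80 -i 4 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait $WRITE_PID; wait $READ_PID; wait $FLUSH_PID; wait $UNMAP_PID

The -i N instance ID is what yields the distinct --file-prefix=spdkN values in the EAL lines above, so the concurrent processes do not fight over hugepage files.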
00:12:22.737 [2024-05-15 10:52:38.896818] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:12:22.737 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.995 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.995 [2024-05-15 10:52:39.085518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.995 EAL: No free 2048 kB hugepages reported on node 1 00:12:22.995 [2024-05-15 10:52:39.184293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:12:22.995 [2024-05-15 10:52:39.188771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.254 EAL: No free 2048 kB hugepages reported on node 1 00:12:23.254 [2024-05-15 10:52:39.289963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:12:23.254 [2024-05-15 10:52:39.292996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.254 [2024-05-15 10:52:39.368849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.254 [2024-05-15 10:52:39.395230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:12:23.254 [2024-05-15 10:52:39.464816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:12:23.512 Running I/O for 1 seconds... 00:12:23.512 Running I/O for 1 seconds... 00:12:23.512 Running I/O for 1 seconds... 00:12:23.512 Running I/O for 1 seconds... 00:12:24.456 00:12:24.456 Latency(us) 00:12:24.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.456 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:12:24.456 Nvme1n1 : 1.00 185096.33 723.03 0.00 0.00 688.84 265.48 1061.93 00:12:24.456 =================================================================================================================== 00:12:24.456 Total : 185096.33 723.03 0.00 0.00 688.84 265.48 1061.93 00:12:24.456 00:12:24.456 Latency(us) 00:12:24.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.456 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:12:24.456 Nvme1n1 : 1.02 6458.33 25.23 0.00 0.00 19615.82 7864.32 25243.50 00:12:24.456 =================================================================================================================== 00:12:24.456 Total : 6458.33 25.23 0.00 0.00 19615.82 7864.32 25243.50 00:12:24.456 00:12:24.456 Latency(us) 00:12:24.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.456 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:12:24.456 Nvme1n1 : 1.01 10881.84 42.51 0.00 0.00 11712.28 8495.41 18932.62 00:12:24.456 =================================================================================================================== 00:12:24.456 Total : 10881.84 42.51 0.00 0.00 11712.28 8495.41 18932.62 00:12:24.456 00:12:24.456 Latency(us) 00:12:24.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:24.456 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:12:24.456 Nvme1n1 : 1.01 8502.71 33.21 0.00 0.00 14992.85 8009.96 28932.93 00:12:24.456 =================================================================================================================== 00:12:24.456 Total : 8502.71 33.21 0.00 0.00 14992.85 8009.96 28932.93 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # 
wait 2769160 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2769162 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2769165 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:25.065 rmmod nvme_tcp 00:12:25.065 rmmod nvme_fabrics 00:12:25.065 rmmod nvme_keyring 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2769002 ']' 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2769002 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@946 -- # '[' -z 2769002 ']' 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # kill -0 2769002 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # uname 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2769002 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2769002' 00:12:25.065 killing process with pid 2769002 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # kill 2769002 00:12:25.065 [2024-05-15 10:52:41.133053] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:25.065 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # wait 2769002 00:12:25.324 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:25.324 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:25.324 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:25.324 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:25.324 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:25.324 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.324 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:25.324 10:52:41 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.224 10:52:43 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:27.224 00:12:27.224 real 0m8.475s 00:12:27.224 user 0m20.213s 00:12:27.224 sys 0m3.843s 00:12:27.224 10:52:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:27.224 10:52:43 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:12:27.224 ************************************ 00:12:27.224 END TEST nvmf_bdev_io_wait 00:12:27.224 ************************************ 00:12:27.482 10:52:43 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:27.482 10:52:43 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:27.482 10:52:43 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:27.482 10:52:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:27.482 ************************************ 00:12:27.482 START TEST nvmf_queue_depth 00:12:27.482 ************************************ 00:12:27.482 10:52:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:12:27.482 * Looking for test storage... 
00:12:27.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:27.482 10:52:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:27.482 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:12:27.482 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:12:27.483 10:52:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:30.013 
10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:30.013 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:30.013 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:30.013 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:30.013 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:30.013 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:30.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:30.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.180 ms 00:12:30.014 00:12:30.014 --- 10.0.0.2 ping statistics --- 00:12:30.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.014 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:30.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:30.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:12:30.014 00:12:30.014 --- 10.0.0.1 ping statistics --- 00:12:30.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.014 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:30.014 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:30.273 10:52:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:12:30.273 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:30.273 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:30.273 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:30.273 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2771801 00:12:30.273 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:30.273 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2771801 00:12:30.273 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 2771801 ']' 00:12:30.273 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.273 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:30.273 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.273 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:30.273 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:30.273 [2024-05-15 10:52:46.292969] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
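nvmfappstart, expanded: launch nvmf_tgt inside the target namespace, remember its pid, and block until the RPC socket answers. A sketch built from the helper calls visible in this trace (waitforlisten and nvmftestfini come from the shared autotest helpers):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
waitforlisten $nvmfpid   # polls /var/tmp/spdk.sock until the target accepts RPCs
trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT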
00:12:30.273 [2024-05-15 10:52:46.293042] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.273 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.273 [2024-05-15 10:52:46.365994] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.273 [2024-05-15 10:52:46.471076] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.273 [2024-05-15 10:52:46.471132] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:30.273 [2024-05-15 10:52:46.471146] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.273 [2024-05-15 10:52:46.471157] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.273 [2024-05-15 10:52:46.471166] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:30.273 [2024-05-15 10:52:46.471192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:30.532 [2024-05-15 10:52:46.608543] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:30.532 Malloc0 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.532 10:52:46 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:30.532 [2024-05-15 10:52:46.675279] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:30.532 [2024-05-15 10:52:46.675569] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2771821 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2771821 /var/tmp/bdevperf.sock 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@827 -- # '[' -z 2771821 ']' 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:30.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:30.532 10:52:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:30.532 [2024-05-15 10:52:46.719533] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
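The run that follows reduces to: start bdevperf idle on a private RPC socket, attach the remote namespace over that socket, then fire the queue-depth-1024 verify workload through the bdevperf.py helper. A sketch with paths shortened (commands as traced below):

./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
waitforlisten $bdevperf_pid /var/tmp/bdevperf.sock
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The -z flag keeps bdevperf from generating I/O until perform_tests arrives, which is what leaves room to attach NVMe0n1 over RPC first.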
00:12:30.532 [2024-05-15 10:52:46.719595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2771821 ]
00:12:30.532 EAL: No free 2048 kB hugepages reported on node 1
00:12:30.790 [2024-05-15 10:52:46.791545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:30.790 [2024-05-15 10:52:46.908694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:30.790 10:52:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:12:30.790 10:52:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@860 -- # return 0
00:12:30.790 10:52:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:12:30.790 10:52:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:30.790 10:52:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:12:31.048 NVMe0n1
00:12:31.048 10:52:47 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:31.048 10:52:47 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:12:31.048 Running I/O for 10 seconds...
00:12:43.244
00:12:43.244 Latency(us)
00:12:43.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:43.244 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:12:43.244 Verification LBA range: start 0x0 length 0x4000
00:12:43.244 NVMe0n1 : 10.08 8436.63 32.96 0.00 0.00 120857.79 16796.63 83497.72
00:12:43.244 ===================================================================================================================
00:12:43.244 Total : 8436.63 32.96 0.00 0.00 120857.79 16796.63 83497.72
00:12:43.244 0
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2771821
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 2771821 ']'
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 2771821
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2771821
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2771821'
00:12:43.244 killing process with pid 2771821
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 2771821
00:12:43.244 Received shutdown signal, test time was about 10.000000 seconds
00:12:43.244
00:12:43.244 Latency(us)
00:12:43.244 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:12:43.244 ===================================================================================================================
00:12:43.244 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 2771821
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:43.244 rmmod nvme_tcp
00:12:43.244 rmmod nvme_fabrics
00:12:43.244 rmmod nvme_keyring
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2771801 ']'
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2771801
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@946 -- # '[' -z 2771801 ']'
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@950 -- # kill -0 2771801
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # uname
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2771801
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2771801'
00:12:43.244 killing process with pid 2771801
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # kill 2771801
00:12:43.244 [2024-05-15 10:52:57.710476] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:12:43.244 10:52:57 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@970 -- # wait 2771801
00:12:43.244 10:52:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:12:43.244 10:52:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:12:43.244 10:52:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:12:43.244 10:52:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:43.244 10:52:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns
00:12:43.244 10:52:58 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:43.244 10:52:58 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:43.244 10:52:58 nvmf_tcp.nvmf_queue_depth --
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.182 10:53:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:44.182 00:12:44.182 real 0m16.577s 00:12:44.182 user 0m22.734s 00:12:44.182 sys 0m3.418s 00:12:44.182 10:53:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:44.182 10:53:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:12:44.182 ************************************ 00:12:44.182 END TEST nvmf_queue_depth 00:12:44.182 ************************************ 00:12:44.182 10:53:00 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:44.182 10:53:00 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:44.182 10:53:00 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:44.182 10:53:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:44.182 ************************************ 00:12:44.182 START TEST nvmf_target_multipath 00:12:44.182 ************************************ 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:12:44.182 * Looking for test storage... 00:12:44.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.182 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.183 10:53:00 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.183 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:44.183 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:44.183 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:44.183 10:53:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:44.183 10:53:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:44.183 10:53:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:12:44.183 10:53:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:44.183 10:53:00 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:12:44.183 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:44.183 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.183 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:44.183 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:44.183 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:44.183 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.183 10:53:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:44.183 10:53:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.183 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:44.183 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:44.183 10:53:00 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:12:44.183 10:53:00 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:46.717 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:46.717 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:12:46.717 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:46.717 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:46.717 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:46.717 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:46.717 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:46.717 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:12:46.717 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:46.717 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:12:46.717 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:12:46.717 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:12:46.717 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:12:46.717 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:12:46.717 10:53:02 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:12:46.717 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:46.717 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:46.717 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:46.718 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:46.718 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.718 10:53:02 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:46.718 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:46.718 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:12:46.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:46.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms
00:12:46.718
00:12:46.718 --- 10.0.0.2 ping statistics ---
00:12:46.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:46.718 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms
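This is the network plumbing that lets a single host act as both initiator and target over real E810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 (the target side), while its sibling cvl_0_1 stays in the root namespace as 10.0.0.1 (the initiator side). A condensed sketch of the same setup, using the device names from this run:

    ip netns add cvl_0_0_ns_spdk                                       # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # hide the target port from the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                                 # initiator-to-target sanity check

Because the two ports are separate PCI functions (0000:0a:00.0 and 0000:0a:00.1, presumably cabled back-to-back in the CI rig), traffic between 10.0.0.1 and 10.0.0.2 traverses the NIC rather than the kernel loopback, which is what NET_TYPE=phy is meant to exercise.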
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:46.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:46.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms
00:12:46.718
00:12:46.718 --- 10.0.0.1 ping statistics ---
00:12:46.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:46.718 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:12:46.718 only one NIC for nvmf test
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:46.718 rmmod nvme_tcp
00:12:46.718 rmmod nvme_fabrics
00:12:46.718 rmmod nvme_keyring
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:46.718 10:53:02 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:49.260 10:53:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush
cvl_0_1 00:12:49.260 10:53:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:49.260 10:53:04 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:49.260 10:53:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:49.260 10:53:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:12:49.260 10:53:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:49.260 10:53:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:12:49.260 10:53:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:49.260 10:53:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:49.260 10:53:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:49.260 10:53:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:12:49.260 10:53:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:12:49.260 10:53:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:49.260 10:53:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:49.260 10:53:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:49.260 10:53:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:49.260 10:53:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:49.260 10:53:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:49.260 10:53:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.260 10:53:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:49.260 10:53:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.260 10:53:04 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:49.260 00:12:49.260 real 0m4.802s 00:12:49.260 user 0m1.016s 00:12:49.260 sys 0m1.807s 00:12:49.260 10:53:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # xtrace_disable 00:12:49.260 10:53:04 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:49.260 ************************************ 00:12:49.260 END TEST nvmf_target_multipath 00:12:49.260 ************************************ 00:12:49.260 10:53:04 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:49.260 10:53:04 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:12:49.260 10:53:04 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:12:49.260 10:53:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:49.260 ************************************ 00:12:49.260 START TEST nvmf_zcopy 00:12:49.260 ************************************ 00:12:49.260 10:53:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:49.260 * Looking for test storage... 
00:12:49.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:49.260 10:53:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:49.260 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:49.260 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:49.260 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:49.260 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:49.260 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:49.260 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:49.260 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:49.260 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:49.260 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:49.260 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:49.260 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:49.260 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:49.260 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:49.260 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:49.260 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:49.260 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:49.260 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:49.260 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:49.260 10:53:05 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:49.260 10:53:05 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:49.260 10:53:05 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:49.260 10:53:05 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.260 10:53:05 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:49.261 10:53:05 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.261 10:53:05 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:49.261 10:53:05 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:49.261 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:12:49.261 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:49.261 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:49.261 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:49.261 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:49.261 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:49.261 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:49.261 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:49.261 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:49.261 10:53:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:49.261 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:49.261 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:49.261 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:49.261 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:49.261 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:49.261 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.261 10:53:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:49.261 10:53:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:49.261 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:49.261 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:49.261 10:53:05 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:12:49.261 10:53:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:51.849 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.849 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:51.850 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:51.850 
10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:51.850 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:51.850 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:51.850 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:51.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:51.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:12:51.850 00:12:51.850 --- 10.0.0.2 ping statistics --- 00:12:51.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.850 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:51.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:51.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:12:51.850 00:12:51.850 --- 10.0.0.1 ping statistics --- 00:12:51.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.850 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@720 -- # xtrace_disable 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2777694 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2777694 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@827 -- # '[' -z 2777694 ']' 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local max_retries=100 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # xtrace_disable 00:12:51.850 10:53:07 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:51.850 [2024-05-15 10:53:07.761479] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:12:51.851 [2024-05-15 10:53:07.761563] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.851 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.851 [2024-05-15 10:53:07.847598] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:51.851 [2024-05-15 10:53:07.969268] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.851 [2024-05-15 10:53:07.969326] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:51.851 [2024-05-15 10:53:07.969342] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.851 [2024-05-15 10:53:07.969356] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.851 [2024-05-15 10:53:07.969368] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:51.851 [2024-05-15 10:53:07.969399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@860 -- # return 0 00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:52.785 [2024-05-15 10:53:08.769227] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:52.785 [2024-05-15 10:53:08.785166] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:52.785 [2024-05-15 10:53:08.785428] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # 
00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:52.785 malloc0
00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:12:52.785 {
00:12:52.785 "params": {
00:12:52.785 "name": "Nvme$subsystem",
00:12:52.785 "trtype": "$TEST_TRANSPORT",
00:12:52.785 "traddr": "$NVMF_FIRST_TARGET_IP",
00:12:52.785 "adrfam": "ipv4",
00:12:52.785 "trsvcid": "$NVMF_PORT",
00:12:52.785 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:12:52.785 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:12:52.785 "hdgst": ${hdgst:-false},
00:12:52.785 "ddgst": ${ddgst:-false}
00:12:52.785 },
00:12:52.785 "method": "bdev_nvme_attach_controller"
00:12:52.785 }
00:12:52.785 EOF
00:12:52.785 )")
00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:12:52.785 10:53:08 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:12:52.785 "params": {
00:12:52.785 "name": "Nvme1",
00:12:52.785 "trtype": "tcp",
00:12:52.785 "traddr": "10.0.0.2",
00:12:52.785 "adrfam": "ipv4",
00:12:52.785 "trsvcid": "4420",
00:12:52.785 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:12:52.785 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:12:52.785 "hdgst": false,
00:12:52.785 "ddgst": false
00:12:52.785 },
00:12:52.785 "method": "bdev_nvme_attach_controller"
00:12:52.785 }'
00:12:52.785 [2024-05-15 10:53:08.866742] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization...
00:12:52.785 [2024-05-15 10:53:08.866832] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2777849 ]
00:12:52.785 EAL: No free 2048 kB hugepages reported on node 1
00:12:52.785 [2024-05-15 10:53:08.947720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:53.043 [2024-05-15 10:53:09.068796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:53.302 Running I/O for 10 seconds...
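The I/O side is the bdevperf example app: gen_nvmf_target_json assembles the bdev_nvme_attach_controller fragment printed above, and bdevperf reads it through a /dev/fd process substitution. Written to a file, an equivalent standalone config and invocation would look like the sketch below; the attach-controller params are verbatim from the printf output, while the outer subsystems/config wrapper is reconstructed from SPDK's standard JSON config layout rather than shown in the trace:

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# First pass in the log: 10 s verify workload, queue depth 128, 8 KiB I/Os.
"$SPDK_ROOT/build/examples/bdevperf" --json /tmp/bdevperf.json -t 10 -q 128 -w verify -o 8192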
00:13:03.268 
00:13:03.268                                                       Latency(us)
00:13:03.268 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:13:03.268 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:13:03.268 Verification LBA range: start 0x0 length 0x1000
00:13:03.268 Nvme1n1                     :      10.01    5779.99      45.16      0.00      0.00   22089.80    1480.63   38253.61
00:13:03.268 ===================================================================================================================
00:13:03.268 Total                       :              5779.99      45.16      0.00      0.00   22089.80    1480.63   38253.61
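The summary is internally consistent: 5779.99 IOPS at the 8192-byte I/O size is 5779.99 × 8192 / 2^20 ≈ 45.16 MiB/s, matching the MiB/s column, and by Little's law a queue depth of 128 divided by the 22089.80 µs average latency implies ≈ 5795 IOPS, in line with the measured rate. A quick recomputation:

# Recompute the two derived figures from the raw numbers in the table above.
awk 'BEGIN {
    printf "%.2f MiB/s\n", 5779.99 * 8192 / (1024 * 1024)           # matches 45.16
    printf "%.0f IOPS via depth/latency\n", 128 / (22089.80 / 1e6)  # ~5795, near 5779.99
}'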
00:13:03.526 10:53:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2779047
00:13:03.526 10:53:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:13:03.526 10:53:19 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:13:03.526 10:53:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:13:03.526 10:53:19 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:13:03.526 10:53:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:13:03.526 10:53:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:13:03.526 10:53:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:13:03.526 10:53:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:13:03.526 {
00:13:03.526 "params": {
00:13:03.526 "name": "Nvme$subsystem",
00:13:03.526 "trtype": "$TEST_TRANSPORT",
00:13:03.526 "traddr": "$NVMF_FIRST_TARGET_IP",
00:13:03.526 "adrfam": "ipv4",
00:13:03.526 "trsvcid": "$NVMF_PORT",
00:13:03.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:13:03.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:13:03.526 "hdgst": ${hdgst:-false},
00:13:03.526 "ddgst": ${ddgst:-false}
00:13:03.526 },
00:13:03.526 "method": "bdev_nvme_attach_controller"
00:13:03.526 }
00:13:03.526 EOF
00:13:03.526 )")
00:13:03.526 10:53:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:13:03.526 [2024-05-15 10:53:19.701793] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:03.526 [2024-05-15 10:53:19.701839] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:13:03.526 10:53:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:13:03.526 10:53:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:13:03.526 10:53:19 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:13:03.526 "params": {
00:13:03.526 "name": "Nvme1",
00:13:03.526 "trtype": "tcp",
00:13:03.526 "traddr": "10.0.0.2",
00:13:03.526 "adrfam": "ipv4",
00:13:03.526 "trsvcid": "4420",
00:13:03.526 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:13:03.526 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:13:03.526 "hdgst": false,
00:13:03.526 "ddgst": false
00:13:03.526 },
00:13:03.526 "method": "bdev_nvme_attach_controller"
00:13:03.526 }'
00:13:03.526 [2024-05-15 10:53:19.709766] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:03.526 [2024-05-15 10:53:19.709794] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats at 10:53:19.717774, .725791 and .733812; duplicates elided ...]
00:13:03.526 [2024-05-15 10:53:19.737398] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization...
00:13:03.526 [2024-05-15 10:53:19.737456] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2779047 ]
[... error pair repeats at 10:53:19.741830, .749852, .757890 and .765895 ...]
00:13:03.785 EAL: No free 2048 kB hugepages reported on node 1
[... error pair repeats at 10:53:19.773943, .781970, .790000, .798019 and .806031 ...]
00:13:03.785 [2024-05-15 10:53:19.813409] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[... error pair repeats from 10:53:19.814059 through 10:53:19.926389 ...]
00:13:03.785 [2024-05-15 10:53:19.931785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[... error pair repeats from 10:53:19.934415 through 10:53:20.110918 ...]
00:13:04.043 Running I/O for 5 seconds...
00:13:04.043 [2024-05-15 10:53:20.118949] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:13:04.043 [2024-05-15 10:53:20.118990] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the identical two-line error pair repeats, with only the timestamps changing, roughly a hundred more times from 10:53:20.136701 onward while the 5-second job runs; the section breaks off mid-pair at 10:53:22.189809; duplicates elided ...]
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.107 [2024-05-15 10:53:22.200449] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.107 [2024-05-15 10:53:22.200477] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.107 [2024-05-15 10:53:22.212303] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.107 [2024-05-15 10:53:22.212331] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.107 [2024-05-15 10:53:22.223355] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.107 [2024-05-15 10:53:22.223384] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.107 [2024-05-15 10:53:22.234901] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.107 [2024-05-15 10:53:22.234952] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.107 [2024-05-15 10:53:22.245728] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.107 [2024-05-15 10:53:22.245755] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.107 [2024-05-15 10:53:22.258994] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.107 [2024-05-15 10:53:22.259022] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.107 [2024-05-15 10:53:22.269142] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.107 [2024-05-15 10:53:22.269171] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.107 [2024-05-15 10:53:22.280155] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.107 [2024-05-15 10:53:22.280184] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.107 [2024-05-15 10:53:22.291575] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.107 [2024-05-15 10:53:22.291603] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.107 [2024-05-15 10:53:22.302777] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.107 [2024-05-15 10:53:22.302805] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.107 [2024-05-15 10:53:22.313679] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.107 [2024-05-15 10:53:22.313706] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.107 [2024-05-15 10:53:22.323712] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.107 [2024-05-15 10:53:22.323740] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.107 [2024-05-15 10:53:22.335304] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.107 [2024-05-15 10:53:22.335331] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.365 [2024-05-15 10:53:22.345705] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.365 [2024-05-15 10:53:22.345732] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.365 [2024-05-15 10:53:22.356350] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.365 [2024-05-15 10:53:22.356384] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.365 [2024-05-15 10:53:22.367796] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.365 [2024-05-15 10:53:22.367823] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.365 [2024-05-15 10:53:22.377380] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.365 [2024-05-15 10:53:22.377408] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.365 [2024-05-15 10:53:22.388574] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.365 [2024-05-15 10:53:22.388601] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.365 [2024-05-15 10:53:22.398752] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.365 [2024-05-15 10:53:22.398779] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.365 [2024-05-15 10:53:22.409599] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.365 [2024-05-15 10:53:22.409626] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.365 [2024-05-15 10:53:22.419896] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.365 [2024-05-15 10:53:22.419923] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.365 [2024-05-15 10:53:22.431277] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.365 [2024-05-15 10:53:22.431307] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.365 [2024-05-15 10:53:22.442300] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.365 [2024-05-15 10:53:22.442327] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.365 [2024-05-15 10:53:22.452859] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.365 [2024-05-15 10:53:22.452907] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.365 [2024-05-15 10:53:22.463671] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.365 [2024-05-15 10:53:22.463699] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.365 [2024-05-15 10:53:22.474800] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.365 [2024-05-15 10:53:22.474826] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.365 [2024-05-15 10:53:22.485419] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.365 [2024-05-15 10:53:22.485447] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.365 [2024-05-15 10:53:22.495276] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.365 [2024-05-15 10:53:22.495304] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.365 [2024-05-15 10:53:22.507376] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.365 [2024-05-15 10:53:22.507403] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.365 [2024-05-15 10:53:22.517695] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.365 [2024-05-15 10:53:22.517722] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.365 [2024-05-15 10:53:22.529469] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.365 [2024-05-15 10:53:22.529495] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.365 [2024-05-15 10:53:22.540318] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.365 [2024-05-15 10:53:22.540345] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.365 [2024-05-15 10:53:22.551438] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.365 [2024-05-15 10:53:22.551464] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.365 [2024-05-15 10:53:22.561734] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.365 [2024-05-15 10:53:22.561768] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.365 [2024-05-15 10:53:22.574799] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.365 [2024-05-15 10:53:22.574825] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.365 [2024-05-15 10:53:22.587652] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.365 [2024-05-15 10:53:22.587678] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.623 [2024-05-15 10:53:22.598452] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.623 [2024-05-15 10:53:22.598479] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.623 [2024-05-15 10:53:22.608717] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.623 [2024-05-15 10:53:22.608745] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.623 [2024-05-15 10:53:22.620333] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.623 [2024-05-15 10:53:22.620360] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.623 [2024-05-15 10:53:22.632955] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.623 [2024-05-15 10:53:22.632983] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.623 [2024-05-15 10:53:22.643887] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.623 [2024-05-15 10:53:22.643937] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.623 [2024-05-15 10:53:22.654878] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.623 [2024-05-15 10:53:22.654905] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.623 [2024-05-15 10:53:22.666076] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.623 [2024-05-15 10:53:22.666104] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.623 [2024-05-15 10:53:22.677156] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.623 [2024-05-15 10:53:22.677183] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.623 [2024-05-15 10:53:22.687835] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.623 [2024-05-15 10:53:22.687862] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.623 [2024-05-15 10:53:22.698981] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.623 [2024-05-15 10:53:22.699008] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.623 [2024-05-15 10:53:22.710762] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.623 [2024-05-15 10:53:22.710789] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.623 [2024-05-15 10:53:22.722475] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.623 [2024-05-15 10:53:22.722502] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.623 [2024-05-15 10:53:22.732922] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.623 [2024-05-15 10:53:22.732959] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.623 [2024-05-15 10:53:22.745022] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.623 [2024-05-15 10:53:22.745050] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.623 [2024-05-15 10:53:22.756093] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.623 [2024-05-15 10:53:22.756121] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.623 [2024-05-15 10:53:22.769288] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.623 [2024-05-15 10:53:22.769316] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.623 [2024-05-15 10:53:22.779856] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.623 [2024-05-15 10:53:22.779891] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.623 [2024-05-15 10:53:22.792631] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.623 [2024-05-15 10:53:22.792658] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.623 [2024-05-15 10:53:22.806327] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.623 [2024-05-15 10:53:22.806354] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.623 [2024-05-15 10:53:22.817785] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.623 [2024-05-15 10:53:22.817813] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.623 [2024-05-15 10:53:22.828579] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.623 [2024-05-15 10:53:22.828607] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.623 [2024-05-15 10:53:22.840054] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.623 [2024-05-15 10:53:22.840089] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.623 [2024-05-15 10:53:22.850914] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.624 [2024-05-15 10:53:22.850964] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.882 [2024-05-15 10:53:22.861345] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.882 [2024-05-15 10:53:22.861373] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.882 [2024-05-15 10:53:22.872153] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.882 [2024-05-15 10:53:22.872181] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.882 [2024-05-15 10:53:22.885201] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.882 [2024-05-15 10:53:22.885243] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.882 [2024-05-15 10:53:22.897743] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.882 [2024-05-15 10:53:22.897770] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.882 [2024-05-15 10:53:22.909787] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.882 [2024-05-15 10:53:22.909814] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.882 [2024-05-15 10:53:22.922489] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.882 [2024-05-15 10:53:22.922516] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.882 [2024-05-15 10:53:22.934460] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.882 [2024-05-15 10:53:22.934486] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.882 [2024-05-15 10:53:22.945058] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.882 [2024-05-15 10:53:22.945085] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.882 [2024-05-15 10:53:22.956982] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.882 [2024-05-15 10:53:22.957009] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.882 [2024-05-15 10:53:22.967267] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.882 [2024-05-15 10:53:22.967294] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.882 [2024-05-15 10:53:22.978651] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.882 [2024-05-15 10:53:22.978693] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.882 [2024-05-15 10:53:22.989373] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.882 [2024-05-15 10:53:22.989400] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.882 [2024-05-15 10:53:23.000149] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.882 [2024-05-15 10:53:23.000184] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.882 [2024-05-15 10:53:23.011499] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.882 [2024-05-15 10:53:23.011527] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.882 [2024-05-15 10:53:23.022319] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.882 [2024-05-15 10:53:23.022346] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.882 [2024-05-15 10:53:23.035161] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.882 [2024-05-15 10:53:23.035190] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.882 [2024-05-15 10:53:23.047767] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.882 [2024-05-15 10:53:23.047794] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.882 [2024-05-15 10:53:23.057825] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.882 [2024-05-15 10:53:23.057852] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.882 [2024-05-15 10:53:23.069066] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.882 [2024-05-15 10:53:23.069094] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.882 [2024-05-15 10:53:23.079595] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.882 [2024-05-15 10:53:23.079622] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.882 [2024-05-15 10:53:23.092254] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.882 [2024-05-15 10:53:23.092282] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.882 [2024-05-15 10:53:23.103697] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.882 [2024-05-15 10:53:23.103725] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:06.882 [2024-05-15 10:53:23.113972] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:06.882 [2024-05-15 10:53:23.114000] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.139 [2024-05-15 10:53:23.125588] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.139 [2024-05-15 10:53:23.125615] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.139 [2024-05-15 10:53:23.138904] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.139 [2024-05-15 10:53:23.138964] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.139 [2024-05-15 10:53:23.150764] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.139 [2024-05-15 10:53:23.150792] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.139 [2024-05-15 10:53:23.161286] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.139 [2024-05-15 10:53:23.161312] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.139 [2024-05-15 10:53:23.174614] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.139 [2024-05-15 10:53:23.174641] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.139 [2024-05-15 10:53:23.186010] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.139 [2024-05-15 10:53:23.186037] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.139 [2024-05-15 10:53:23.197543] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.139 [2024-05-15 10:53:23.197570] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.139 [2024-05-15 10:53:23.209099] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.139 [2024-05-15 10:53:23.209127] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.139 [2024-05-15 10:53:23.219526] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.139 [2024-05-15 10:53:23.219560] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.139 [2024-05-15 10:53:23.229266] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.139 [2024-05-15 10:53:23.229293] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.139 [2024-05-15 10:53:23.241151] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.139 [2024-05-15 10:53:23.241179] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.139 [2024-05-15 10:53:23.251127] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.139 [2024-05-15 10:53:23.251154] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.139 [2024-05-15 10:53:23.262520] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.139 [2024-05-15 10:53:23.262547] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.139 [2024-05-15 10:53:23.273153] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.139 [2024-05-15 10:53:23.273180] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.139 [2024-05-15 10:53:23.292644] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.139 [2024-05-15 10:53:23.292672] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.139 [2024-05-15 10:53:23.303995] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.139 [2024-05-15 10:53:23.304023] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.139 [2024-05-15 10:53:23.315870] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.139 [2024-05-15 10:53:23.315897] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.139 [2024-05-15 10:53:23.325522] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.139 [2024-05-15 10:53:23.325549] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.139 [2024-05-15 10:53:23.337055] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.139 [2024-05-15 10:53:23.337083] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.139 [2024-05-15 10:53:23.347887] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.139 [2024-05-15 10:53:23.347937] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.139 [2024-05-15 10:53:23.361135] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.139 [2024-05-15 10:53:23.361163] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.397 [2024-05-15 10:53:23.372216] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.397 [2024-05-15 10:53:23.372244] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.397 [2024-05-15 10:53:23.383420] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.397 [2024-05-15 10:53:23.383448] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.397 [2024-05-15 10:53:23.393857] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.397 [2024-05-15 10:53:23.393884] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.397 [2024-05-15 10:53:23.405472] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.397 [2024-05-15 10:53:23.405499] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.397 [2024-05-15 10:53:23.415925] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.397 [2024-05-15 10:53:23.415974] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.397 [2024-05-15 10:53:23.425118] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.397 [2024-05-15 10:53:23.425146] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.397 [2024-05-15 10:53:23.436211] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.397 [2024-05-15 10:53:23.436239] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.397 [2024-05-15 10:53:23.448112] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.397 [2024-05-15 10:53:23.448141] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.397 [2024-05-15 10:53:23.457506] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.397 [2024-05-15 10:53:23.457533] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.397 [2024-05-15 10:53:23.468141] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.397 [2024-05-15 10:53:23.468169] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.397 [2024-05-15 10:53:23.478961] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.397 [2024-05-15 10:53:23.478989] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.397 [2024-05-15 10:53:23.490382] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.397 [2024-05-15 10:53:23.490410] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.397 [2024-05-15 10:53:23.501564] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.397 [2024-05-15 10:53:23.501591] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.397 [2024-05-15 10:53:23.512312] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.397 [2024-05-15 10:53:23.512339] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.397 [2024-05-15 10:53:23.523000] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.397 [2024-05-15 10:53:23.523028] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.397 [2024-05-15 10:53:23.534024] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.397 [2024-05-15 10:53:23.534052] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.397 [2024-05-15 10:53:23.543974] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.397 [2024-05-15 10:53:23.544012] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.397 [2024-05-15 10:53:23.555278] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.397 [2024-05-15 10:53:23.555306] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.397 [2024-05-15 10:53:23.566251] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.397 [2024-05-15 10:53:23.566278] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.397 [2024-05-15 10:53:23.577062] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.397 [2024-05-15 10:53:23.577091] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.397 [2024-05-15 10:53:23.587107] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.397 [2024-05-15 10:53:23.587135] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.397 [2024-05-15 10:53:23.598576] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.397 [2024-05-15 10:53:23.598604] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.397 [2024-05-15 10:53:23.610763] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.397 [2024-05-15 10:53:23.610790] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.397 [2024-05-15 10:53:23.623307] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.397 [2024-05-15 10:53:23.623334] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.655 [2024-05-15 10:53:23.634694] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.655 [2024-05-15 10:53:23.634722] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.655 [2024-05-15 10:53:23.645296] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.655 [2024-05-15 10:53:23.645328] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.655 [2024-05-15 10:53:23.655567] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.655 [2024-05-15 10:53:23.655594] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.655 [2024-05-15 10:53:23.666404] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.655 [2024-05-15 10:53:23.666431] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.655 [2024-05-15 10:53:23.676679] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.655 [2024-05-15 10:53:23.676705] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.655 [2024-05-15 10:53:23.687833] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.655 [2024-05-15 10:53:23.687862] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.655 [2024-05-15 10:53:23.698405] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.655 [2024-05-15 10:53:23.698433] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.655 [2024-05-15 10:53:23.708760] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.655 [2024-05-15 10:53:23.708788] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.655 [2024-05-15 10:53:23.721679] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.655 [2024-05-15 10:53:23.721705] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.655 [2024-05-15 10:53:23.731734] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.655 [2024-05-15 10:53:23.731761] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.655 [2024-05-15 10:53:23.742170] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.655 [2024-05-15 10:53:23.742198] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.655 [2024-05-15 10:53:23.752639] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.655 [2024-05-15 10:53:23.752666] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.655 [2024-05-15 10:53:23.763408] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.655 [2024-05-15 10:53:23.763435] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.655 [2024-05-15 10:53:23.773988] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.655 [2024-05-15 10:53:23.774015] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.655 [2024-05-15 10:53:23.784172] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.655 [2024-05-15 10:53:23.784200] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.655 [2024-05-15 10:53:23.794885] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.655 [2024-05-15 10:53:23.794927] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.655 [2024-05-15 10:53:23.806300] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.655 [2024-05-15 10:53:23.806327] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.655 [2024-05-15 10:53:23.817297] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.655 [2024-05-15 10:53:23.817324] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.655 [2024-05-15 10:53:23.829500] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.655 [2024-05-15 10:53:23.829527] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.655 [2024-05-15 10:53:23.841502] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.655 [2024-05-15 10:53:23.841530] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.655 [2024-05-15 10:53:23.851850] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.655 [2024-05-15 10:53:23.851877] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.655 [2024-05-15 10:53:23.863564] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.655 [2024-05-15 10:53:23.863592] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.655 [2024-05-15 10:53:23.873880] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.655 [2024-05-15 10:53:23.873907] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.655 [2024-05-15 10:53:23.885867] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.655 [2024-05-15 10:53:23.885895] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.913 [2024-05-15 10:53:23.896076] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.913 [2024-05-15 10:53:23.896105] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.913 [2024-05-15 10:53:23.907925] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.913 [2024-05-15 10:53:23.907974] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.913 [2024-05-15 10:53:23.917678] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.913 [2024-05-15 10:53:23.917705] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.913 [2024-05-15 10:53:23.929847] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.913 [2024-05-15 10:53:23.929888] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.913 [2024-05-15 10:53:23.940637] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.913 [2024-05-15 10:53:23.940668] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.913 [2024-05-15 10:53:23.952357] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.913 [2024-05-15 10:53:23.952384] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.913 [2024-05-15 10:53:23.961806] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.913 [2024-05-15 10:53:23.961833] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.913 [2024-05-15 10:53:23.973717] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.913 [2024-05-15 10:53:23.973743] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.913 [2024-05-15 10:53:23.983586] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.913 [2024-05-15 10:53:23.983613] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.913 [2024-05-15 10:53:23.995898] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.913 [2024-05-15 10:53:23.995925] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.913 [2024-05-15 10:53:24.005635] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.913 [2024-05-15 10:53:24.005662] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.913 [2024-05-15 10:53:24.017794] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.913 [2024-05-15 10:53:24.017821] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.913 [2024-05-15 10:53:24.028060] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.913 [2024-05-15 10:53:24.028088] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.913 [2024-05-15 10:53:24.038110] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.913 [2024-05-15 10:53:24.038139] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.913 [2024-05-15 10:53:24.049993] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.913 [2024-05-15 10:53:24.050029] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.913 [2024-05-15 10:53:24.060188] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.913 [2024-05-15 10:53:24.060216] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.913 [2024-05-15 10:53:24.072088] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.913 [2024-05-15 10:53:24.072117] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.913 [2024-05-15 10:53:24.083240] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.913 [2024-05-15 10:53:24.083267] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.913 [2024-05-15 10:53:24.094300] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.913 [2024-05-15 10:53:24.094327] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.913 [2024-05-15 10:53:24.105371] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.913 [2024-05-15 10:53:24.105398] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.913 [2024-05-15 10:53:24.115690] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.913 [2024-05-15 10:53:24.115717] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.913 [2024-05-15 10:53:24.129070] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.913 [2024-05-15 10:53:24.129098] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:07.913 [2024-05-15 10:53:24.138958] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:07.913 [2024-05-15 10:53:24.138991] 
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.171 [2024-05-15 10:53:24.150178] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.171 [2024-05-15 10:53:24.150205] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.171 [2024-05-15 10:53:24.160408] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.171 [2024-05-15 10:53:24.160435] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.171 [2024-05-15 10:53:24.171655] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.171 [2024-05-15 10:53:24.171682] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.171 [2024-05-15 10:53:24.184259] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.171 [2024-05-15 10:53:24.184286] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.171 [2024-05-15 10:53:24.196320] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.171 [2024-05-15 10:53:24.196348] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.171 [2024-05-15 10:53:24.206812] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.171 [2024-05-15 10:53:24.206843] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.171 [2024-05-15 10:53:24.217069] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.171 [2024-05-15 10:53:24.217099] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.171 [2024-05-15 10:53:24.228483] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.171 [2024-05-15 10:53:24.228511] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.171 [2024-05-15 10:53:24.239104] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.171 [2024-05-15 10:53:24.239140] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.171 [2024-05-15 10:53:24.250351] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.171 [2024-05-15 10:53:24.250379] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.171 [2024-05-15 10:53:24.261241] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.171 [2024-05-15 10:53:24.261298] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.171 [2024-05-15 10:53:24.271881] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.171 [2024-05-15 10:53:24.271908] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.171 [2024-05-15 10:53:24.282725] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.171 [2024-05-15 10:53:24.282754] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.171 [2024-05-15 10:53:24.293624] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.171 [2024-05-15 10:53:24.293651] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.171 [2024-05-15 10:53:24.304773] 
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.171 [2024-05-15 10:53:24.304799] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.945 [2024-05-15 10:53:25.091201] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.945 [2024-05-15 10:53:25.091243]
nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.945 [2024-05-15 10:53:25.133502] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.945 [2024-05-15 10:53:25.133529] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:08.945 00:13:08.945 Latency(us) 00:13:08.945 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:08.945 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:13:08.945 Nvme1n1 : 5.01 11523.16 90.02 0.00 0.00 11090.07 2767.08 29321.29 00:13:08.945 =================================================================================================================== 00:13:08.945 Total : 11523.16 90.02 0.00 0.00 11090.07 2767.08 29321.29 00:13:08.945 [2024-05-15 10:53:25.139792] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:08.946 [2024-05-15 10:53:25.139820] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.204 [2024-05-15 10:53:25.203019]
subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.204 [2024-05-15 10:53:25.203065] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.205 [2024-05-15 10:53:25.411531] subsystem.c:1981:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:13:09.205 [2024-05-15 10:53:25.411556] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:09.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2779047) - No such process 00:13:09.205 10:53:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2779047 00:13:09.205 10:53:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.205 10:53:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.205 10:53:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:09.205 10:53:25 nvmf_tcp.nvmf_zcopy
-- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.205 10:53:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:09.205 10:53:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.205 10:53:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:09.205 delay0 00:13:09.205 10:53:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.205 10:53:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:13:09.205 10:53:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.205 10:53:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:09.463 10:53:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.463 10:53:25 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:13:09.463 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.463 [2024-05-15 10:53:25.535851] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:16.048 Initializing NVMe Controllers 00:13:16.048 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:16.048 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:16.048 Initialization complete. Launching workers. 00:13:16.048 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 58 00:13:16.048 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 341, failed to submit 37 00:13:16.048 success 121, unsuccess 220, failed 0 00:13:16.048 10:53:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:13:16.048 10:53:31 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:13:16.048 10:53:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:16.048 10:53:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:13:16.048 10:53:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:16.048 10:53:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:13:16.048 10:53:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:16.048 10:53:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:16.048 rmmod nvme_tcp 00:13:16.048 rmmod nvme_fabrics 00:13:16.048 rmmod nvme_keyring 00:13:16.048 10:53:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:16.048 10:53:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:13:16.048 10:53:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:13:16.049 10:53:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2777694 ']' 00:13:16.049 10:53:31 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2777694 00:13:16.049 10:53:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@946 -- # '[' -z 2777694 ']' 00:13:16.049 10:53:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@950 -- # kill -0 2777694 00:13:16.049 10:53:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # uname 00:13:16.049 10:53:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 
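For reference: the abort pass above works by swapping in an artificially slow namespace, so there is always I/O in flight for the abort example to cancel. A minimal sketch of that sequence against a running target, using only the RPCs and flags visible in the trace (the harness's rpc_cmd ultimately goes through scripts/rpc.py; paths and the pre-existing malloc0 bdev are assumed from earlier in the test):

# delay0 wraps malloc0 and adds ~1 second of latency to every read and write (values in microseconds)
scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
# expose the slow bdev as NSID 1 of the subsystem under test
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# drive it over NVMe/TCP and abort outstanding commands for 5 seconds
build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'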
00:13:16.049 10:53:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2777694 00:13:16.049 10:53:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:13:16.049 10:53:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:13:16.049 10:53:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2777694' 00:13:16.049 killing process with pid 2777694 00:13:16.049 10:53:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # kill 2777694 00:13:16.049 [2024-05-15 10:53:31.824599] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:16.049 10:53:31 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@970 -- # wait 2777694 00:13:16.049 10:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:16.049 10:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:16.049 10:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:16.049 10:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:16.049 10:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:16.049 10:53:32 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.049 10:53:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:16.049 10:53:32 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:17.967 10:53:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:17.967 00:13:17.967 real 0m29.198s 00:13:17.967 user 0m42.359s 00:13:17.967 sys 0m8.729s 00:13:17.967 10:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:17.967 10:53:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:17.967 ************************************ 00:13:17.967 END TEST nvmf_zcopy 00:13:17.967 ************************************ 00:13:17.967 10:53:34 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:17.967 10:53:34 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:17.967 10:53:34 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:17.967 10:53:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:18.228 ************************************ 00:13:18.228 START TEST nvmf_nmic 00:13:18.228 ************************************ 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:13:18.229 * Looking for test storage... 
00:13:18.229 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.229 10:53:34 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:13:18.229 10:53:34 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:20.763 
10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:20.763 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.763 10:53:36 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:20.763 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:20.763 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:20.763 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 
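The nvmf_tcp_init steps traced below move one E810 port into its own network namespace so the target (later started via ip netns exec) and the initiator can exchange real TCP traffic on a single host. Condensed from the trace, with the interface names discovered above, the wiring amounts to:

ip netns add cvl_0_0_ns_spdk                                   # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into it
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator keeps the peer port
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                                             # sanity check across the link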
00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:20.763 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:20.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:20.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:13:20.764 00:13:20.764 --- 10.0.0.2 ping statistics --- 00:13:20.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.764 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:20.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:20.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:13:20.764 00:13:20.764 --- 10.0.0.1 ping statistics --- 00:13:20.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.764 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2782727 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2782727 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@827 -- # '[' -z 2782727 ']' 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:20.764 10:53:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:20.764 [2024-05-15 10:53:36.903883] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:13:20.764 [2024-05-15 10:53:36.903972] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.764 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.764 [2024-05-15 10:53:36.990891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:21.023 [2024-05-15 10:53:37.114638] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:21.023 [2024-05-15 10:53:37.114691] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:21.023 [2024-05-15 10:53:37.114706] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:21.023 [2024-05-15 10:53:37.114720] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:21.023 [2024-05-15 10:53:37.114732] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:21.023 [2024-05-15 10:53:37.114815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:21.023 [2024-05-15 10:53:37.114869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:21.023 [2024-05-15 10:53:37.114918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:21.023 [2024-05-15 10:53:37.114921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.023 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:21.023 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@860 -- # return 0 00:13:21.023 10:53:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:21.023 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:21.023 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:21.281 10:53:37 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:21.281 10:53:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:21.281 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.281 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:21.281 [2024-05-15 10:53:37.276858] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:21.281 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.281 10:53:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:21.281 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.281 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:21.281 Malloc0 00:13:21.281 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:21.282 [2024-05-15 10:53:37.330162] nvmf_rpc.c: 
610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:21.282 [2024-05-15 10:53:37.330462] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:13:21.282 test case1: single bdev can't be used in multiple subsystems 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:21.282 [2024-05-15 10:53:37.354290] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:13:21.282 [2024-05-15 10:53:37.354320] subsystem.c:2015:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:13:21.282 [2024-05-15 10:53:37.354335] nvmf_rpc.c:1531:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:21.282 request: 00:13:21.282 { 00:13:21.282 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:21.282 "namespace": { 00:13:21.282 "bdev_name": "Malloc0", 00:13:21.282 "no_auto_visible": false 00:13:21.282 }, 00:13:21.282 "method": "nvmf_subsystem_add_ns", 00:13:21.282 "req_id": 1 00:13:21.282 } 00:13:21.282 Got JSON-RPC error response 00:13:21.282 response: 00:13:21.282 { 00:13:21.282 "code": -32602, 00:13:21.282 "message": "Invalid parameters" 00:13:21.282 } 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:13:21.282 Adding namespace failed - expected result. 
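The failure above is the expected exclusive-claim behavior: once one subsystem has claimed a bdev as a namespace (an exclusive_write open in the bdev layer), a second subsystem cannot claim the same bdev. A minimal sketch of reproducing it by hand against a running target, using the same RPCs as the trace (default /var/tmp/spdk.sock socket assumed):

scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first claim succeeds
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: Malloc0 already claimed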
00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:13:21.282 test case2: host connect to nvmf target in multiple paths 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:21.282 [2024-05-15 10:53:37.362406] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:21.282 10:53:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:21.848 10:53:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:13:22.415 10:53:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:13:22.415 10:53:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1194 -- # local i=0 00:13:22.415 10:53:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:22.415 10:53:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # [[ -n '' ]] 00:13:22.415 10:53:38 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1201 -- # sleep 2 00:13:24.941 10:53:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:24.941 10:53:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:24.942 10:53:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:24.942 10:53:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # nvme_devices=1 00:13:24.942 10:53:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:24.942 10:53:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # return 0 00:13:24.942 10:53:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:24.942 [global] 00:13:24.942 thread=1 00:13:24.942 invalidate=1 00:13:24.942 rw=write 00:13:24.942 time_based=1 00:13:24.942 runtime=1 00:13:24.942 ioengine=libaio 00:13:24.942 direct=1 00:13:24.942 bs=4096 00:13:24.942 iodepth=1 00:13:24.942 norandommap=0 00:13:24.942 numjobs=1 00:13:24.942 00:13:24.942 verify_dump=1 00:13:24.942 verify_backlog=512 00:13:24.942 verify_state_save=0 00:13:24.942 do_verify=1 00:13:24.942 verify=crc32c-intel 00:13:24.942 [job0] 00:13:24.942 filename=/dev/nvme0n1 00:13:24.942 Could not set queue depth (nvme0n1) 00:13:24.942 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:24.942 fio-3.35 00:13:24.942 Starting 1 thread 00:13:25.874 00:13:25.874 job0: (groupid=0, jobs=1): err= 0: pid=2783356: Wed May 15 10:53:41 2024 00:13:25.874 read: IOPS=18, BW=73.1KiB/s (74.9kB/s)(76.0KiB/1039msec) 00:13:25.874 slat (nsec): min=12323, max=41905, avg=18903.26, stdev=9088.35 
00:13:25.874 clat (usec): min=40888, max=41042, avg=40971.83, stdev=41.29 00:13:25.874 lat (usec): min=40920, max=41056, avg=40990.73, stdev=38.15 00:13:25.874 clat percentiles (usec): 00:13:25.874 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:25.874 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:25.874 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:13:25.874 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:25.874 | 99.99th=[41157] 00:13:25.874 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:13:25.874 slat (usec): min=8, max=30602, avg=89.54, stdev=1351.19 00:13:25.874 clat (usec): min=307, max=554, avg=411.01, stdev=38.49 00:13:25.874 lat (usec): min=316, max=31047, avg=500.55, stdev=1353.46 00:13:25.874 clat percentiles (usec): 00:13:25.874 | 1.00th=[ 334], 5.00th=[ 351], 10.00th=[ 363], 20.00th=[ 375], 00:13:25.874 | 30.00th=[ 383], 40.00th=[ 396], 50.00th=[ 408], 60.00th=[ 429], 00:13:25.874 | 70.00th=[ 441], 80.00th=[ 449], 90.00th=[ 453], 95.00th=[ 465], 00:13:25.874 | 99.00th=[ 490], 99.50th=[ 515], 99.90th=[ 553], 99.95th=[ 553], 00:13:25.874 | 99.99th=[ 553] 00:13:25.874 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:13:25.874 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:25.874 lat (usec) : 500=95.67%, 750=0.75% 00:13:25.874 lat (msec) : 50=3.58% 00:13:25.874 cpu : usr=0.67%, sys=1.45%, ctx=533, majf=0, minf=2 00:13:25.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:25.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:25.874 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:25.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:25.874 00:13:25.874 Run status group 0 (all jobs): 00:13:25.874 READ: bw=73.1KiB/s (74.9kB/s), 73.1KiB/s-73.1KiB/s (74.9kB/s-74.9kB/s), io=76.0KiB (77.8kB), run=1039-1039msec 00:13:25.874 WRITE: bw=1971KiB/s (2018kB/s), 1971KiB/s-1971KiB/s (2018kB/s-2018kB/s), io=2048KiB (2097kB), run=1039-1039msec 00:13:25.874 00:13:25.874 Disk stats (read/write): 00:13:25.874 nvme0n1: ios=40/512, merge=0/0, ticks=1574/196, in_queue=1770, util=98.70% 00:13:25.874 10:53:41 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:25.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:13:25.874 10:53:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:25.874 10:53:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1215 -- # local i=0 00:13:25.874 10:53:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL 00:13:25.874 10:53:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.874 10:53:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL 00:13:25.874 10:53:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.874 10:53:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # return 0 00:13:25.874 10:53:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:13:25.874 10:53:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:13:25.874 10:53:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 
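(Editor's note, not part of the trace: the waitforserial and waitforserial_disconnect helpers traced above poll lsblk until the expected number of namespaces with the given serial appears, or disappears again after nvme disconnect. A condensed sketch of that polling loop, reconstructed from the xtrace output; the 15-retry bound, the 2-second sleep, and the lsblk/grep -c pipeline are all visible in the trace, while the function body as a whole is an approximation of common/autotest_common.sh, not a verbatim copy.)

  waitforserial() {
      local serial=$1 want=${2:-1} i=0 have=0
      while (( i++ <= 15 )); do
          sleep 2
          # Count block devices whose SERIAL column matches, e.g. SPDKISFASTANDAWESOME.
          have=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
          (( have == want )) && return 0
      done
      return 1
  }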
00:13:25.874 10:53:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:13:25.874 10:53:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:25.874 10:53:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:13:25.874 10:53:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:25.874 10:53:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:25.874 rmmod nvme_tcp 00:13:25.874 rmmod nvme_fabrics 00:13:25.874 rmmod nvme_keyring 00:13:25.874 10:53:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:25.874 10:53:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:13:25.874 10:53:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:13:25.874 10:53:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2782727 ']' 00:13:25.874 10:53:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2782727 00:13:25.874 10:53:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@946 -- # '[' -z 2782727 ']' 00:13:25.874 10:53:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@950 -- # kill -0 2782727 00:13:25.874 10:53:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # uname 00:13:25.874 10:53:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:25.874 10:53:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2782727 00:13:26.132 10:53:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:13:26.133 10:53:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:13:26.133 10:53:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2782727' 00:13:26.133 killing process with pid 2782727 00:13:26.133 10:53:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # kill 2782727 00:13:26.133 [2024-05-15 10:53:42.128704] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:26.133 10:53:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@970 -- # wait 2782727 00:13:26.392 10:53:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:26.392 10:53:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:26.392 10:53:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:26.392 10:53:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:26.392 10:53:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:26.392 10:53:42 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.392 10:53:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.392 10:53:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.296 10:53:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:28.296 00:13:28.296 real 0m10.252s 00:13:28.296 user 0m22.168s 00:13:28.296 sys 0m2.561s 00:13:28.296 10:53:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:28.296 10:53:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:13:28.296 ************************************ 00:13:28.296 END TEST nvmf_nmic 00:13:28.296 ************************************ 00:13:28.296 10:53:44 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:28.296 10:53:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:13:28.296 10:53:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:28.296 10:53:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:28.555 ************************************ 00:13:28.555 START TEST nvmf_fio_target 00:13:28.555 ************************************ 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:13:28.555 * Looking for test storage... 00:13:28.555 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:28.555 10:53:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:31.088 10:53:47 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:31.088 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:31.088 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:31.088 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.089 10:53:47 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:31.089 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:31.089 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:31.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:31.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:13:31.089 00:13:31.089 --- 10.0.0.2 ping statistics --- 00:13:31.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.089 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:31.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:31.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:13:31.089 00:13:31.089 --- 10.0.0.1 ping statistics --- 00:13:31.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.089 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@720 -- # xtrace_disable 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2785721 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2785721 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@827 -- # '[' -z 2785721 ']' 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local max_retries=100 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
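(Editor's note, not part of the trace: the nvmf_tcp_init sequence traced above isolates one port of the e810 NIC in a network namespace so that the target, 10.0.0.2 on cvl_0_0, and the initiator, 10.0.0.1 on cvl_0_1, exchange traffic over the physical link rather than loopback. Condensed from the commands in the trace, with interface names and addresses exactly as logged:)

  NS=cvl_0_0_ns_spdk
  ip netns add $NS
  ip link set cvl_0_0 netns $NS                 # move the target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator port stays in the root namespace
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                            # initiator -> target sanity check
  ip netns exec $NS ping -c 1 10.0.0.1          # target -> initiator sanity check
  # nvmf_tgt is then launched inside the namespace, as seen in the nvmfappstart trace:
  ip netns exec $NS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF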
00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # xtrace_disable 00:13:31.089 10:53:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.089 [2024-05-15 10:53:47.296188] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:13:31.089 [2024-05-15 10:53:47.296294] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.347 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.347 [2024-05-15 10:53:47.381511] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:31.347 [2024-05-15 10:53:47.505832] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.348 [2024-05-15 10:53:47.505880] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.348 [2024-05-15 10:53:47.505897] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.348 [2024-05-15 10:53:47.505922] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.348 [2024-05-15 10:53:47.505942] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:31.348 [2024-05-15 10:53:47.506014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.348 [2024-05-15 10:53:47.506048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.348 [2024-05-15 10:53:47.506079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:31.348 [2024-05-15 10:53:47.506736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.282 10:53:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:32.282 10:53:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@860 -- # return 0 00:13:32.282 10:53:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:32.282 10:53:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:32.282 10:53:48 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.282 10:53:48 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.282 10:53:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:32.539 [2024-05-15 10:53:48.559747] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:32.539 10:53:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:32.798 10:53:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:13:32.798 10:53:48 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:33.056 10:53:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:13:33.056 10:53:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:33.314 10:53:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:13:33.315 10:53:49 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:33.573 10:53:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:13:33.573 10:53:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:13:33.830 10:53:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:34.095 10:53:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:13:34.095 10:53:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:34.386 10:53:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:13:34.386 10:53:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:34.644 10:53:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:13:34.644 10:53:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:13:34.902 10:53:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:35.159 10:53:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:35.159 10:53:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:35.417 10:53:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:13:35.417 10:53:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:35.675 10:53:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:35.675 [2024-05-15 10:53:51.892691] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:35.675 [2024-05-15 10:53:51.893030] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:35.932 10:53:51 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:13:35.932 10:53:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:13:36.189 10:53:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:36.753 10:53:52 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 
-- # waitforserial SPDKISFASTANDAWESOME 4 00:13:36.753 10:53:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1194 -- # local i=0 00:13:36.753 10:53:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local nvme_device_counter=1 nvme_devices=0 00:13:36.753 10:53:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # [[ -n 4 ]] 00:13:36.753 10:53:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # nvme_device_counter=4 00:13:36.753 10:53:52 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # sleep 2 00:13:39.274 10:53:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # (( i++ <= 15 )) 00:13:39.274 10:53:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # lsblk -l -o NAME,SERIAL 00:13:39.274 10:53:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # grep -c SPDKISFASTANDAWESOME 00:13:39.274 10:53:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_devices=4 00:13:39.274 10:53:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( nvme_devices == nvme_device_counter )) 00:13:39.274 10:53:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # return 0 00:13:39.274 10:53:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:13:39.274 [global] 00:13:39.274 thread=1 00:13:39.274 invalidate=1 00:13:39.274 rw=write 00:13:39.274 time_based=1 00:13:39.274 runtime=1 00:13:39.274 ioengine=libaio 00:13:39.274 direct=1 00:13:39.274 bs=4096 00:13:39.274 iodepth=1 00:13:39.274 norandommap=0 00:13:39.274 numjobs=1 00:13:39.274 00:13:39.274 verify_dump=1 00:13:39.274 verify_backlog=512 00:13:39.274 verify_state_save=0 00:13:39.274 do_verify=1 00:13:39.274 verify=crc32c-intel 00:13:39.274 [job0] 00:13:39.274 filename=/dev/nvme0n1 00:13:39.274 [job1] 00:13:39.274 filename=/dev/nvme0n2 00:13:39.274 [job2] 00:13:39.274 filename=/dev/nvme0n3 00:13:39.274 [job3] 00:13:39.274 filename=/dev/nvme0n4 00:13:39.274 Could not set queue depth (nvme0n1) 00:13:39.274 Could not set queue depth (nvme0n2) 00:13:39.274 Could not set queue depth (nvme0n3) 00:13:39.274 Could not set queue depth (nvme0n4) 00:13:39.274 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:39.274 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:39.274 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:39.274 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:39.274 fio-3.35 00:13:39.274 Starting 4 threads 00:13:40.203 00:13:40.203 job0: (groupid=0, jobs=1): err= 0: pid=2786804: Wed May 15 10:53:56 2024 00:13:40.203 read: IOPS=508, BW=2035KiB/s (2084kB/s)(2076KiB/1020msec) 00:13:40.203 slat (nsec): min=8370, max=65611, avg=31403.84, stdev=6315.48 00:13:40.203 clat (usec): min=401, max=42094, avg=1051.97, stdev=4688.93 00:13:40.203 lat (usec): min=433, max=42108, avg=1083.37, stdev=4687.25 00:13:40.203 clat percentiles (usec): 00:13:40.203 | 1.00th=[ 433], 5.00th=[ 441], 10.00th=[ 449], 20.00th=[ 465], 00:13:40.203 | 30.00th=[ 478], 40.00th=[ 490], 50.00th=[ 506], 60.00th=[ 515], 00:13:40.203 | 70.00th=[ 523], 80.00th=[ 529], 90.00th=[ 553], 95.00th=[ 594], 00:13:40.203 | 99.00th=[40633], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:13:40.203 | 
99.99th=[42206] 00:13:40.203 write: IOPS=1003, BW=4016KiB/s (4112kB/s)(4096KiB/1020msec); 0 zone resets 00:13:40.203 slat (usec): min=8, max=260, avg=27.78, stdev=14.20 00:13:40.203 clat (usec): min=241, max=1106, avg=408.14, stdev=163.94 00:13:40.203 lat (usec): min=251, max=1146, avg=435.92, stdev=166.51 00:13:40.203 clat percentiles (usec): 00:13:40.203 | 1.00th=[ 249], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 302], 00:13:40.203 | 30.00th=[ 322], 40.00th=[ 338], 50.00th=[ 367], 60.00th=[ 388], 00:13:40.203 | 70.00th=[ 400], 80.00th=[ 437], 90.00th=[ 685], 95.00th=[ 816], 00:13:40.203 | 99.00th=[ 971], 99.50th=[ 1012], 99.90th=[ 1106], 99.95th=[ 1106], 00:13:40.203 | 99.99th=[ 1106] 00:13:40.203 bw ( KiB/s): min= 4096, max= 4096, per=41.48%, avg=4096.00, stdev= 0.00, samples=2 00:13:40.203 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:13:40.203 lat (usec) : 250=1.04%, 500=70.45%, 750=22.29%, 1000=5.31% 00:13:40.203 lat (msec) : 2=0.45%, 50=0.45% 00:13:40.203 cpu : usr=2.26%, sys=4.42%, ctx=1546, majf=0, minf=1 00:13:40.203 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:40.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.203 issued rwts: total=519,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.203 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:40.203 job1: (groupid=0, jobs=1): err= 0: pid=2786805: Wed May 15 10:53:56 2024 00:13:40.203 read: IOPS=430, BW=1722KiB/s (1763kB/s)(1732KiB/1006msec) 00:13:40.203 slat (nsec): min=6251, max=33696, avg=14536.83, stdev=5144.58 00:13:40.203 clat (usec): min=471, max=42593, avg=1901.88, stdev=7277.84 00:13:40.203 lat (usec): min=483, max=42605, avg=1916.42, stdev=7280.31 00:13:40.203 clat percentiles (usec): 00:13:40.203 | 1.00th=[ 486], 5.00th=[ 498], 10.00th=[ 510], 20.00th=[ 515], 00:13:40.203 | 30.00th=[ 523], 40.00th=[ 529], 50.00th=[ 529], 60.00th=[ 537], 00:13:40.203 | 70.00th=[ 545], 80.00th=[ 553], 90.00th=[ 570], 95.00th=[ 594], 00:13:40.204 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42730], 99.95th=[42730], 00:13:40.204 | 99.99th=[42730] 00:13:40.204 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:13:40.204 slat (nsec): min=6525, max=63856, avg=16500.77, stdev=9736.65 00:13:40.204 clat (usec): min=242, max=1152, avg=318.24, stdev=83.67 00:13:40.204 lat (usec): min=249, max=1180, avg=334.74, stdev=84.59 00:13:40.204 clat percentiles (usec): 00:13:40.204 | 1.00th=[ 245], 5.00th=[ 251], 10.00th=[ 255], 20.00th=[ 265], 00:13:40.204 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 310], 00:13:40.204 | 70.00th=[ 338], 80.00th=[ 375], 90.00th=[ 412], 95.00th=[ 441], 00:13:40.204 | 99.00th=[ 685], 99.50th=[ 742], 99.90th=[ 1156], 99.95th=[ 1156], 00:13:40.204 | 99.99th=[ 1156] 00:13:40.204 bw ( KiB/s): min= 4096, max= 4096, per=41.48%, avg=4096.00, stdev= 0.00, samples=1 00:13:40.204 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:40.204 lat (usec) : 250=2.22%, 500=53.33%, 750=42.65%, 1000=0.11% 00:13:40.204 lat (msec) : 2=0.11%, 50=1.59% 00:13:40.204 cpu : usr=1.00%, sys=1.29%, ctx=946, majf=0, minf=2 00:13:40.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:40.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.204 issued rwts: 
total=433,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.204 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:40.204 job2: (groupid=0, jobs=1): err= 0: pid=2786806: Wed May 15 10:53:56 2024 00:13:40.204 read: IOPS=20, BW=81.6KiB/s (83.5kB/s)(84.0KiB/1030msec) 00:13:40.204 slat (nsec): min=8612, max=36078, avg=26367.90, stdev=10841.27 00:13:40.204 clat (usec): min=40888, max=42043, avg=41069.23, stdev=321.30 00:13:40.204 lat (usec): min=40924, max=42058, avg=41095.59, stdev=316.23 00:13:40.204 clat percentiles (usec): 00:13:40.204 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:13:40.204 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:40.204 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:13:40.204 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:40.204 | 99.99th=[42206] 00:13:40.204 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:13:40.204 slat (nsec): min=7698, max=64312, avg=15902.62, stdev=8464.77 00:13:40.204 clat (usec): min=254, max=501, avg=306.37, stdev=41.92 00:13:40.204 lat (usec): min=263, max=511, avg=322.27, stdev=43.81 00:13:40.204 clat percentiles (usec): 00:13:40.204 | 1.00th=[ 262], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 277], 00:13:40.204 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 302], 00:13:40.204 | 70.00th=[ 310], 80.00th=[ 322], 90.00th=[ 367], 95.00th=[ 408], 00:13:40.204 | 99.00th=[ 445], 99.50th=[ 465], 99.90th=[ 502], 99.95th=[ 502], 00:13:40.204 | 99.99th=[ 502] 00:13:40.204 bw ( KiB/s): min= 4096, max= 4096, per=41.48%, avg=4096.00, stdev= 0.00, samples=1 00:13:40.204 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:40.204 lat (usec) : 500=95.87%, 750=0.19% 00:13:40.204 lat (msec) : 50=3.94% 00:13:40.204 cpu : usr=0.78%, sys=0.78%, ctx=535, majf=0, minf=1 00:13:40.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:40.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.204 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.204 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:40.204 job3: (groupid=0, jobs=1): err= 0: pid=2786807: Wed May 15 10:53:56 2024 00:13:40.204 read: IOPS=18, BW=73.3KiB/s (75.0kB/s)(76.0KiB/1037msec) 00:13:40.204 slat (nsec): min=11767, max=36079, avg=27419.26, stdev=10679.90 00:13:40.204 clat (usec): min=40829, max=41990, avg=41084.36, stdev=314.47 00:13:40.204 lat (usec): min=40843, max=42026, avg=41111.78, stdev=316.87 00:13:40.204 clat percentiles (usec): 00:13:40.204 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:13:40.204 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:40.204 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:13:40.204 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:40.204 | 99.99th=[42206] 00:13:40.204 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:13:40.204 slat (usec): min=6, max=42346, avg=111.57, stdev=1871.27 00:13:40.204 clat (usec): min=242, max=901, avg=381.27, stdev=90.26 00:13:40.204 lat (usec): min=250, max=42743, avg=492.85, stdev=1874.18 00:13:40.204 clat percentiles (usec): 00:13:40.204 | 1.00th=[ 253], 5.00th=[ 265], 10.00th=[ 281], 20.00th=[ 306], 00:13:40.204 | 30.00th=[ 326], 40.00th=[ 351], 50.00th=[ 
379], 60.00th=[ 396], 00:13:40.204 | 70.00th=[ 416], 80.00th=[ 449], 90.00th=[ 486], 95.00th=[ 515], 00:13:40.204 | 99.00th=[ 693], 99.50th=[ 857], 99.90th=[ 906], 99.95th=[ 906], 00:13:40.204 | 99.99th=[ 906] 00:13:40.204 bw ( KiB/s): min= 4096, max= 4096, per=41.48%, avg=4096.00, stdev= 0.00, samples=1 00:13:40.204 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:40.204 lat (usec) : 250=0.75%, 500=89.08%, 750=5.65%, 1000=0.94% 00:13:40.204 lat (msec) : 50=3.58% 00:13:40.204 cpu : usr=0.87%, sys=0.97%, ctx=535, majf=0, minf=1 00:13:40.204 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:40.204 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.204 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:40.204 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:40.204 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:40.204 00:13:40.204 Run status group 0 (all jobs): 00:13:40.204 READ: bw=3826KiB/s (3918kB/s), 73.3KiB/s-2035KiB/s (75.0kB/s-2084kB/s), io=3968KiB (4063kB), run=1006-1037msec 00:13:40.204 WRITE: bw=9875KiB/s (10.1MB/s), 1975KiB/s-4016KiB/s (2022kB/s-4112kB/s), io=10.0MiB (10.5MB), run=1006-1037msec 00:13:40.204 00:13:40.204 Disk stats (read/write): 00:13:40.204 nvme0n1: ios=560/1024, merge=0/0, ticks=385/387, in_queue=772, util=87.58% 00:13:40.204 nvme0n2: ios=281/512, merge=0/0, ticks=808/159, in_queue=967, util=89.73% 00:13:40.204 nvme0n3: ios=73/512, merge=0/0, ticks=895/155, in_queue=1050, util=93.63% 00:13:40.204 nvme0n4: ios=81/512, merge=0/0, ticks=976/189, in_queue=1165, util=96.21% 00:13:40.204 10:53:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:13:40.462 [global] 00:13:40.462 thread=1 00:13:40.462 invalidate=1 00:13:40.462 rw=randwrite 00:13:40.462 time_based=1 00:13:40.462 runtime=1 00:13:40.462 ioengine=libaio 00:13:40.462 direct=1 00:13:40.462 bs=4096 00:13:40.462 iodepth=1 00:13:40.462 norandommap=0 00:13:40.462 numjobs=1 00:13:40.462 00:13:40.462 verify_dump=1 00:13:40.462 verify_backlog=512 00:13:40.462 verify_state_save=0 00:13:40.462 do_verify=1 00:13:40.462 verify=crc32c-intel 00:13:40.462 [job0] 00:13:40.462 filename=/dev/nvme0n1 00:13:40.462 [job1] 00:13:40.462 filename=/dev/nvme0n2 00:13:40.462 [job2] 00:13:40.462 filename=/dev/nvme0n3 00:13:40.462 [job3] 00:13:40.462 filename=/dev/nvme0n4 00:13:40.462 Could not set queue depth (nvme0n1) 00:13:40.462 Could not set queue depth (nvme0n2) 00:13:40.462 Could not set queue depth (nvme0n3) 00:13:40.462 Could not set queue depth (nvme0n4) 00:13:40.462 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:40.462 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:40.462 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:40.462 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:40.462 fio-3.35 00:13:40.462 Starting 4 threads 00:13:41.835 00:13:41.835 job0: (groupid=0, jobs=1): err= 0: pid=2787157: Wed May 15 10:53:57 2024 00:13:41.835 read: IOPS=20, BW=83.3KiB/s (85.3kB/s)(84.0KiB/1008msec) 00:13:41.835 slat (nsec): min=14412, max=34582, avg=24230.29, stdev=8784.20 00:13:41.835 clat (usec): min=536, max=43912, 
avg=39245.63, stdev=8894.81 00:13:41.835 lat (usec): min=554, max=43932, avg=39269.86, stdev=8895.89 00:13:41.835 clat percentiles (usec): 00:13:41.835 | 1.00th=[ 537], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:41.835 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:41.835 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:13:41.835 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:13:41.835 | 99.99th=[43779] 00:13:41.835 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:13:41.835 slat (nsec): min=10018, max=64932, avg=26887.36, stdev=8543.26 00:13:41.835 clat (usec): min=261, max=515, avg=322.71, stdev=44.17 00:13:41.835 lat (usec): min=274, max=552, avg=349.60, stdev=47.77 00:13:41.835 clat percentiles (usec): 00:13:41.835 | 1.00th=[ 269], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 289], 00:13:41.835 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 306], 60.00th=[ 314], 00:13:41.835 | 70.00th=[ 326], 80.00th=[ 367], 90.00th=[ 379], 95.00th=[ 416], 00:13:41.835 | 99.00th=[ 465], 99.50th=[ 474], 99.90th=[ 515], 99.95th=[ 515], 00:13:41.835 | 99.99th=[ 515] 00:13:41.835 bw ( KiB/s): min= 4096, max= 4096, per=40.80%, avg=4096.00, stdev= 0.00, samples=1 00:13:41.835 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:41.835 lat (usec) : 500=95.68%, 750=0.56% 00:13:41.835 lat (msec) : 50=3.75% 00:13:41.835 cpu : usr=1.49%, sys=1.19%, ctx=533, majf=0, minf=1 00:13:41.835 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:41.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.835 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:41.835 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:41.835 job1: (groupid=0, jobs=1): err= 0: pid=2787158: Wed May 15 10:53:57 2024 00:13:41.835 read: IOPS=18, BW=74.5KiB/s (76.3kB/s)(76.0KiB/1020msec) 00:13:41.835 slat (nsec): min=15078, max=48054, avg=24748.00, stdev=10135.90 00:13:41.835 clat (usec): min=40886, max=42039, avg=41048.35, stdev=273.53 00:13:41.835 lat (usec): min=40924, max=42062, avg=41073.09, stdev=271.53 00:13:41.835 clat percentiles (usec): 00:13:41.835 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:13:41.835 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:41.835 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:13:41.835 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:41.835 | 99.99th=[42206] 00:13:41.835 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:13:41.835 slat (nsec): min=12648, max=64799, avg=32263.78, stdev=8959.36 00:13:41.835 clat (usec): min=275, max=600, avg=425.97, stdev=84.84 00:13:41.835 lat (usec): min=297, max=641, avg=458.23, stdev=88.49 00:13:41.835 clat percentiles (usec): 00:13:41.835 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 302], 20.00th=[ 322], 00:13:41.835 | 30.00th=[ 383], 40.00th=[ 412], 50.00th=[ 445], 60.00th=[ 461], 00:13:41.835 | 70.00th=[ 482], 80.00th=[ 502], 90.00th=[ 537], 95.00th=[ 553], 00:13:41.835 | 99.00th=[ 570], 99.50th=[ 578], 99.90th=[ 603], 99.95th=[ 603], 00:13:41.835 | 99.99th=[ 603] 00:13:41.836 bw ( KiB/s): min= 4096, max= 4096, per=40.80%, avg=4096.00, stdev= 0.00, samples=1 00:13:41.836 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:41.836 
lat (usec) : 500=76.65%, 750=19.77% 00:13:41.836 lat (msec) : 50=3.58% 00:13:41.836 cpu : usr=1.28%, sys=1.86%, ctx=534, majf=0, minf=2 00:13:41.836 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:41.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.836 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:41.836 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:41.836 job2: (groupid=0, jobs=1): err= 0: pid=2787159: Wed May 15 10:53:57 2024 00:13:41.836 read: IOPS=513, BW=2054KiB/s (2103kB/s)(2072KiB/1009msec) 00:13:41.836 slat (nsec): min=7722, max=70211, avg=20992.19, stdev=7797.61 00:13:41.836 clat (usec): min=517, max=41374, avg=1070.35, stdev=4329.46 00:13:41.836 lat (usec): min=528, max=41407, avg=1091.35, stdev=4329.35 00:13:41.836 clat percentiles (usec): 00:13:41.836 | 1.00th=[ 529], 5.00th=[ 545], 10.00th=[ 553], 20.00th=[ 570], 00:13:41.836 | 30.00th=[ 578], 40.00th=[ 586], 50.00th=[ 594], 60.00th=[ 603], 00:13:41.836 | 70.00th=[ 619], 80.00th=[ 627], 90.00th=[ 652], 95.00th=[ 693], 00:13:41.836 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:13:41.836 | 99.99th=[41157] 00:13:41.836 write: IOPS=1014, BW=4059KiB/s (4157kB/s)(4096KiB/1009msec); 0 zone resets 00:13:41.836 slat (nsec): min=9108, max=76707, avg=30717.89, stdev=11850.22 00:13:41.836 clat (usec): min=241, max=654, avg=391.14, stdev=69.01 00:13:41.836 lat (usec): min=267, max=695, avg=421.86, stdev=75.41 00:13:41.836 clat percentiles (usec): 00:13:41.836 | 1.00th=[ 265], 5.00th=[ 293], 10.00th=[ 306], 20.00th=[ 326], 00:13:41.836 | 30.00th=[ 347], 40.00th=[ 371], 50.00th=[ 388], 60.00th=[ 404], 00:13:41.836 | 70.00th=[ 420], 80.00th=[ 445], 90.00th=[ 482], 95.00th=[ 515], 00:13:41.836 | 99.00th=[ 586], 99.50th=[ 594], 99.90th=[ 644], 99.95th=[ 652], 00:13:41.836 | 99.99th=[ 652] 00:13:41.836 bw ( KiB/s): min= 4096, max= 4096, per=40.80%, avg=4096.00, stdev= 0.00, samples=2 00:13:41.836 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:13:41.836 lat (usec) : 250=0.06%, 500=61.41%, 750=37.48%, 1000=0.65% 00:13:41.836 lat (msec) : 50=0.39% 00:13:41.836 cpu : usr=2.58%, sys=5.85%, ctx=1543, majf=0, minf=1 00:13:41.836 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:41.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.836 issued rwts: total=518,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:41.836 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:41.836 job3: (groupid=0, jobs=1): err= 0: pid=2787160: Wed May 15 10:53:57 2024 00:13:41.836 read: IOPS=20, BW=82.8KiB/s (84.8kB/s)(84.0KiB/1014msec) 00:13:41.836 slat (nsec): min=16388, max=36259, avg=26247.14, stdev=9292.49 00:13:41.836 clat (usec): min=501, max=41981, avg=39134.57, stdev=8857.20 00:13:41.836 lat (usec): min=537, max=41998, avg=39160.82, stdev=8855.04 00:13:41.836 clat percentiles (usec): 00:13:41.836 | 1.00th=[ 502], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:13:41.836 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:13:41.836 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:13:41.836 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:13:41.836 | 99.99th=[42206] 00:13:41.836 write: IOPS=504, 
BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:13:41.836 slat (nsec): min=10144, max=63529, avg=30558.90, stdev=11320.13 00:13:41.836 clat (usec): min=258, max=559, avg=334.74, stdev=59.39 00:13:41.836 lat (usec): min=270, max=602, avg=365.30, stdev=63.42 00:13:41.836 clat percentiles (usec): 00:13:41.836 | 1.00th=[ 265], 5.00th=[ 269], 10.00th=[ 269], 20.00th=[ 281], 00:13:41.836 | 30.00th=[ 289], 40.00th=[ 302], 50.00th=[ 326], 60.00th=[ 343], 00:13:41.836 | 70.00th=[ 363], 80.00th=[ 388], 90.00th=[ 416], 95.00th=[ 445], 00:13:41.836 | 99.00th=[ 502], 99.50th=[ 529], 99.90th=[ 562], 99.95th=[ 562], 00:13:41.836 | 99.99th=[ 562] 00:13:41.836 bw ( KiB/s): min= 4096, max= 4096, per=40.80%, avg=4096.00, stdev= 0.00, samples=1 00:13:41.836 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:13:41.836 lat (usec) : 500=94.75%, 750=1.50% 00:13:41.836 lat (msec) : 50=3.75% 00:13:41.836 cpu : usr=0.59%, sys=1.68%, ctx=534, majf=0, minf=1 00:13:41.836 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:41.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:41.836 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:41.836 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:41.836 00:13:41.836 Run status group 0 (all jobs): 00:13:41.836 READ: bw=2271KiB/s (2325kB/s), 74.5KiB/s-2054KiB/s (76.3kB/s-2103kB/s), io=2316KiB (2372kB), run=1008-1020msec 00:13:41.836 WRITE: bw=9.80MiB/s (10.3MB/s), 2008KiB/s-4059KiB/s (2056kB/s-4157kB/s), io=10.0MiB (10.5MB), run=1008-1020msec 00:13:41.836 00:13:41.836 Disk stats (read/write): 00:13:41.836 nvme0n1: ios=66/512, merge=0/0, ticks=699/145, in_queue=844, util=87.47% 00:13:41.836 nvme0n2: ios=39/512, merge=0/0, ticks=1566/215, in_queue=1781, util=97.86% 00:13:41.836 nvme0n3: ios=560/1024, merge=0/0, ticks=1386/374, in_queue=1760, util=98.53% 00:13:41.836 nvme0n4: ios=60/512, merge=0/0, ticks=1607/161, in_queue=1768, util=99.15% 00:13:41.836 10:53:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:13:41.836 [global] 00:13:41.836 thread=1 00:13:41.836 invalidate=1 00:13:41.836 rw=write 00:13:41.836 time_based=1 00:13:41.836 runtime=1 00:13:41.836 ioengine=libaio 00:13:41.836 direct=1 00:13:41.836 bs=4096 00:13:41.836 iodepth=128 00:13:41.836 norandommap=0 00:13:41.836 numjobs=1 00:13:41.836 00:13:41.836 verify_dump=1 00:13:41.836 verify_backlog=512 00:13:41.836 verify_state_save=0 00:13:41.836 do_verify=1 00:13:41.836 verify=crc32c-intel 00:13:41.836 [job0] 00:13:41.836 filename=/dev/nvme0n1 00:13:41.836 [job1] 00:13:41.836 filename=/dev/nvme0n2 00:13:41.836 [job2] 00:13:41.836 filename=/dev/nvme0n3 00:13:41.836 [job3] 00:13:41.836 filename=/dev/nvme0n4 00:13:41.836 Could not set queue depth (nvme0n1) 00:13:41.836 Could not set queue depth (nvme0n2) 00:13:41.836 Could not set queue depth (nvme0n3) 00:13:41.836 Could not set queue depth (nvme0n4) 00:13:42.095 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:42.095 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:42.095 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:42.095 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:42.095 fio-3.35 00:13:42.095 Starting 4 threads 00:13:43.470 00:13:43.470 job0: (groupid=0, jobs=1): err= 0: pid=2787383: Wed May 15 10:53:59 2024 00:13:43.470 read: IOPS=3032, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1013msec) 00:13:43.470 slat (usec): min=3, max=19039, avg=150.50, stdev=1065.11 00:13:43.470 clat (usec): min=9507, max=76056, avg=18098.99, stdev=8588.64 00:13:43.470 lat (usec): min=9539, max=76096, avg=18249.49, stdev=8709.27 00:13:43.470 clat percentiles (usec): 00:13:43.470 | 1.00th=[ 9896], 5.00th=[11207], 10.00th=[12649], 20.00th=[13042], 00:13:43.470 | 30.00th=[13566], 40.00th=[14222], 50.00th=[14615], 60.00th=[15401], 00:13:43.470 | 70.00th=[17433], 80.00th=[23200], 90.00th=[30278], 95.00th=[33162], 00:13:43.470 | 99.00th=[55313], 99.50th=[64226], 99.90th=[76022], 99.95th=[76022], 00:13:43.470 | 99.99th=[76022] 00:13:43.470 write: IOPS=3346, BW=13.1MiB/s (13.7MB/s)(13.2MiB/1013msec); 0 zone resets 00:13:43.470 slat (usec): min=4, max=23080, avg=149.05, stdev=886.32 00:13:43.470 clat (usec): min=6365, max=76055, avg=20617.00, stdev=11272.80 00:13:43.470 lat (usec): min=6374, max=76075, avg=20766.05, stdev=11329.43 00:13:43.470 clat percentiles (usec): 00:13:43.470 | 1.00th=[ 8455], 5.00th=[ 9896], 10.00th=[10814], 20.00th=[12780], 00:13:43.470 | 30.00th=[14484], 40.00th=[15270], 50.00th=[17433], 60.00th=[21365], 00:13:43.470 | 70.00th=[23200], 80.00th=[25822], 90.00th=[27395], 95.00th=[49546], 00:13:43.470 | 99.00th=[65274], 99.50th=[65274], 99.90th=[65799], 99.95th=[76022], 00:13:43.470 | 99.99th=[76022] 00:13:43.470 bw ( KiB/s): min=12288, max=13816, per=26.17%, avg=13052.00, stdev=1080.46, samples=2 00:13:43.470 iops : min= 3072, max= 3454, avg=3263.00, stdev=270.11, samples=2 00:13:43.470 lat (msec) : 10=4.47%, 20=61.03%, 50=31.43%, 100=3.06% 00:13:43.470 cpu : usr=3.66%, sys=7.02%, ctx=315, majf=0, minf=15 00:13:43.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:13:43.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:43.470 issued rwts: total=3072,3390,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:43.471 job1: (groupid=0, jobs=1): err= 0: pid=2787384: Wed May 15 10:53:59 2024 00:13:43.471 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec) 00:13:43.471 slat (usec): min=2, max=45668, avg=185.20, stdev=1434.39 00:13:43.471 clat (usec): min=7656, max=98007, avg=23553.76, stdev=17082.84 00:13:43.471 lat (usec): min=7838, max=98021, avg=23738.96, stdev=17219.20 00:13:43.471 clat percentiles (usec): 00:13:43.471 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[10552], 00:13:43.471 | 30.00th=[10945], 40.00th=[11600], 50.00th=[12780], 60.00th=[18744], 00:13:43.471 | 70.00th=[32637], 80.00th=[43254], 90.00th=[49021], 95.00th=[55837], 00:13:43.471 | 99.00th=[73925], 99.50th=[73925], 99.90th=[73925], 99.95th=[82314], 00:13:43.471 | 99.99th=[98042] 00:13:43.471 write: IOPS=3109, BW=12.1MiB/s (12.7MB/s)(12.3MiB/1011msec); 0 zone resets 00:13:43.471 slat (usec): min=3, max=31627, avg=130.53, stdev=1011.19 00:13:43.471 clat (usec): min=722, max=83179, avg=17778.52, stdev=11036.91 00:13:43.471 lat (usec): min=758, max=83184, avg=17909.06, stdev=11081.51 00:13:43.471 clat percentiles (usec): 00:13:43.471 | 1.00th=[ 7701], 5.00th=[ 9765], 10.00th=[10552], 20.00th=[12125], 00:13:43.471 | 
30.00th=[12780], 40.00th=[13566], 50.00th=[14091], 60.00th=[14615], 00:13:43.471 | 70.00th=[15533], 80.00th=[20579], 90.00th=[33162], 95.00th=[40633], 00:13:43.471 | 99.00th=[62129], 99.50th=[78119], 99.90th=[78119], 99.95th=[78119], 00:13:43.471 | 99.99th=[83362] 00:13:43.471 bw ( KiB/s): min=12024, max=12552, per=24.64%, avg=12288.00, stdev=373.35, samples=2 00:13:43.471 iops : min= 3006, max= 3138, avg=3072.00, stdev=93.34, samples=2 00:13:43.471 lat (usec) : 750=0.02% 00:13:43.471 lat (msec) : 4=0.08%, 10=8.29%, 20=62.02%, 50=23.54%, 100=6.06% 00:13:43.471 cpu : usr=2.48%, sys=3.96%, ctx=280, majf=0, minf=7 00:13:43.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:13:43.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:43.471 issued rwts: total=3072,3144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:43.471 job2: (groupid=0, jobs=1): err= 0: pid=2787386: Wed May 15 10:53:59 2024 00:13:43.471 read: IOPS=2025, BW=8103KiB/s (8297kB/s)(8192KiB/1011msec) 00:13:43.471 slat (usec): min=2, max=18684, avg=178.97, stdev=1216.30 00:13:43.471 clat (usec): min=5421, max=52651, avg=22855.12, stdev=10855.78 00:13:43.471 lat (usec): min=5427, max=55986, avg=23034.09, stdev=10960.81 00:13:43.471 clat percentiles (usec): 00:13:43.471 | 1.00th=[ 6915], 5.00th=[ 9896], 10.00th=[11338], 20.00th=[12649], 00:13:43.471 | 30.00th=[13304], 40.00th=[16712], 50.00th=[21890], 60.00th=[26084], 00:13:43.471 | 70.00th=[29754], 80.00th=[33162], 90.00th=[37487], 95.00th=[39060], 00:13:43.471 | 99.00th=[52691], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:13:43.471 | 99.99th=[52691] 00:13:43.471 write: IOPS=2039, BW=8158KiB/s (8354kB/s)(8248KiB/1011msec); 0 zone resets 00:13:43.471 slat (usec): min=3, max=20534, avg=288.36, stdev=1369.88 00:13:43.471 clat (msec): min=4, max=145, avg=39.13, stdev=32.06 00:13:43.471 lat (msec): min=4, max=145, avg=39.42, stdev=32.23 00:13:43.471 clat percentiles (msec): 00:13:43.471 | 1.00th=[ 8], 5.00th=[ 10], 10.00th=[ 14], 20.00th=[ 21], 00:13:43.471 | 30.00th=[ 22], 40.00th=[ 24], 50.00th=[ 27], 60.00th=[ 28], 00:13:43.471 | 70.00th=[ 36], 80.00th=[ 61], 90.00th=[ 87], 95.00th=[ 113], 00:13:43.471 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:13:43.471 | 99.99th=[ 146] 00:13:43.471 bw ( KiB/s): min= 7696, max= 8688, per=16.42%, avg=8192.00, stdev=701.45, samples=2 00:13:43.471 iops : min= 1924, max= 2172, avg=2048.00, stdev=175.36, samples=2 00:13:43.471 lat (msec) : 10=5.94%, 20=26.16%, 50=54.72%, 100=9.15%, 250=4.04% 00:13:43.471 cpu : usr=2.87%, sys=4.95%, ctx=276, majf=0, minf=21 00:13:43.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:13:43.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:43.471 issued rwts: total=2048,2062,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:43.471 job3: (groupid=0, jobs=1): err= 0: pid=2787387: Wed May 15 10:53:59 2024 00:13:43.471 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:13:43.471 slat (usec): min=2, max=12836, avg=103.90, stdev=690.84 00:13:43.471 clat (usec): min=1619, max=37877, avg=14567.36, stdev=3793.48 00:13:43.471 lat (usec): min=1635, max=37880, avg=14671.26, 
stdev=3857.55 00:13:43.471 clat percentiles (usec): 00:13:43.471 | 1.00th=[ 7111], 5.00th=[10290], 10.00th=[12125], 20.00th=[12911], 00:13:43.471 | 30.00th=[13173], 40.00th=[13698], 50.00th=[13960], 60.00th=[14222], 00:13:43.471 | 70.00th=[14746], 80.00th=[15139], 90.00th=[17695], 95.00th=[21365], 00:13:43.471 | 99.00th=[32637], 99.50th=[34866], 99.90th=[38011], 99.95th=[38011], 00:13:43.471 | 99.99th=[38011] 00:13:43.471 write: IOPS=4006, BW=15.7MiB/s (16.4MB/s)(15.8MiB/1007msec); 0 zone resets 00:13:43.471 slat (usec): min=3, max=12050, avg=133.79, stdev=578.50 00:13:43.471 clat (usec): min=721, max=38223, avg=18704.86, stdev=6520.54 00:13:43.471 lat (usec): min=735, max=38759, avg=18838.65, stdev=6570.06 00:13:43.471 clat percentiles (usec): 00:13:43.471 | 1.00th=[ 3392], 5.00th=[ 8717], 10.00th=[11338], 20.00th=[13698], 00:13:43.471 | 30.00th=[14877], 40.00th=[16057], 50.00th=[17695], 60.00th=[19792], 00:13:43.471 | 70.00th=[22152], 80.00th=[25560], 90.00th=[27395], 95.00th=[28967], 00:13:43.471 | 99.00th=[33424], 99.50th=[33817], 99.90th=[38011], 99.95th=[38011], 00:13:43.471 | 99.99th=[38011] 00:13:43.471 bw ( KiB/s): min=14880, max=16384, per=31.34%, avg=15632.00, stdev=1063.49, samples=2 00:13:43.471 iops : min= 3720, max= 4096, avg=3908.00, stdev=265.87, samples=2 00:13:43.471 lat (usec) : 750=0.03% 00:13:43.471 lat (msec) : 2=0.39%, 4=0.41%, 10=4.84%, 20=71.03%, 50=23.30% 00:13:43.471 cpu : usr=3.28%, sys=6.76%, ctx=683, majf=0, minf=7 00:13:43.471 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:43.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:43.471 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:43.471 issued rwts: total=3584,4035,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:43.471 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:43.471 00:13:43.471 Run status group 0 (all jobs): 00:13:43.471 READ: bw=45.4MiB/s (47.6MB/s), 8103KiB/s-13.9MiB/s (8297kB/s-14.6MB/s), io=46.0MiB (48.2MB), run=1007-1013msec 00:13:43.471 WRITE: bw=48.7MiB/s (51.1MB/s), 8158KiB/s-15.7MiB/s (8354kB/s-16.4MB/s), io=49.3MiB (51.7MB), run=1007-1013msec 00:13:43.471 00:13:43.471 Disk stats (read/write): 00:13:43.471 nvme0n1: ios=2612/2575, merge=0/0, ticks=47659/54449, in_queue=102108, util=94.09% 00:13:43.471 nvme0n2: ios=2646/3072, merge=0/0, ticks=22671/25059, in_queue=47730, util=91.98% 00:13:43.471 nvme0n3: ios=1588/1767, merge=0/0, ticks=31375/66523, in_queue=97898, util=99.79% 00:13:43.471 nvme0n4: ios=3124/3453, merge=0/0, ticks=36886/48350, in_queue=85236, util=99.68% 00:13:43.471 10:53:59 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:13:43.471 [global] 00:13:43.471 thread=1 00:13:43.471 invalidate=1 00:13:43.471 rw=randwrite 00:13:43.471 time_based=1 00:13:43.471 runtime=1 00:13:43.471 ioengine=libaio 00:13:43.471 direct=1 00:13:43.471 bs=4096 00:13:43.471 iodepth=128 00:13:43.471 norandommap=0 00:13:43.471 numjobs=1 00:13:43.471 00:13:43.471 verify_dump=1 00:13:43.471 verify_backlog=512 00:13:43.471 verify_state_save=0 00:13:43.471 do_verify=1 00:13:43.471 verify=crc32c-intel 00:13:43.471 [job0] 00:13:43.471 filename=/dev/nvme0n1 00:13:43.471 [job1] 00:13:43.471 filename=/dev/nvme0n2 00:13:43.471 [job2] 00:13:43.471 filename=/dev/nvme0n3 00:13:43.471 [job3] 00:13:43.471 filename=/dev/nvme0n4 00:13:43.471 Could not set queue depth (nvme0n1) 00:13:43.471 Could not set 
queue depth (nvme0n2) 00:13:43.471 Could not set queue depth (nvme0n3) 00:13:43.471 Could not set queue depth (nvme0n4) 00:13:43.471 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:43.471 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:43.471 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:43.471 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:13:43.471 fio-3.35 00:13:43.471 Starting 4 threads 00:13:44.847 00:13:44.847 job0: (groupid=0, jobs=1): err= 0: pid=2787612: Wed May 15 10:54:00 2024 00:13:44.847 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:13:44.847 slat (usec): min=2, max=30252, avg=127.85, stdev=1090.45 00:13:44.847 clat (usec): min=1257, max=55938, avg=18520.55, stdev=8499.59 00:13:44.847 lat (usec): min=1274, max=55945, avg=18648.40, stdev=8559.67 00:13:44.847 clat percentiles (usec): 00:13:44.847 | 1.00th=[ 2376], 5.00th=[ 6456], 10.00th=[ 9765], 20.00th=[11338], 00:13:44.847 | 30.00th=[13173], 40.00th=[14222], 50.00th=[16450], 60.00th=[19268], 00:13:44.848 | 70.00th=[23725], 80.00th=[25560], 90.00th=[31589], 95.00th=[32900], 00:13:44.848 | 99.00th=[40109], 99.50th=[41681], 99.90th=[44827], 99.95th=[44827], 00:13:44.848 | 99.99th=[55837] 00:13:44.848 write: IOPS=3424, BW=13.4MiB/s (14.0MB/s)(13.4MiB/1005msec); 0 zone resets 00:13:44.848 slat (usec): min=3, max=56200, avg=156.96, stdev=1321.68 00:13:44.848 clat (usec): min=4731, max=93013, avg=20521.85, stdev=15525.40 00:13:44.848 lat (usec): min=5477, max=93023, avg=20678.80, stdev=15597.56 00:13:44.848 clat percentiles (usec): 00:13:44.848 | 1.00th=[ 8094], 5.00th=[10552], 10.00th=[11600], 20.00th=[13042], 00:13:44.848 | 30.00th=[13960], 40.00th=[14615], 50.00th=[15270], 60.00th=[16188], 00:13:44.848 | 70.00th=[17957], 80.00th=[23200], 90.00th=[32637], 95.00th=[57410], 00:13:44.848 | 99.00th=[89654], 99.50th=[89654], 99.90th=[92799], 99.95th=[92799], 00:13:44.848 | 99.99th=[92799] 00:13:44.848 bw ( KiB/s): min= 9832, max=16688, per=23.03%, avg=13260.00, stdev=4847.92, samples=2 00:13:44.848 iops : min= 2458, max= 4172, avg=3315.00, stdev=1211.98, samples=2 00:13:44.848 lat (msec) : 2=0.34%, 4=0.80%, 10=5.59%, 20=62.79%, 50=26.57% 00:13:44.848 lat (msec) : 100=3.91% 00:13:44.848 cpu : usr=3.39%, sys=4.18%, ctx=330, majf=0, minf=11 00:13:44.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:13:44.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:44.848 issued rwts: total=3072,3442,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:44.848 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:44.848 job1: (groupid=0, jobs=1): err= 0: pid=2787613: Wed May 15 10:54:00 2024 00:13:44.848 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:13:44.848 slat (usec): min=3, max=8501, avg=113.95, stdev=609.59 00:13:44.848 clat (usec): min=3731, max=31559, avg=14007.70, stdev=4821.90 00:13:44.848 lat (usec): min=3744, max=31569, avg=14121.65, stdev=4860.53 00:13:44.848 clat percentiles (usec): 00:13:44.848 | 1.00th=[ 6718], 5.00th=[ 8356], 10.00th=[ 9241], 20.00th=[ 9896], 00:13:44.848 | 30.00th=[10683], 40.00th=[11469], 50.00th=[12649], 60.00th=[14353], 00:13:44.848 | 70.00th=[15926], 80.00th=[17957], 
90.00th=[21365], 95.00th=[23987], 00:13:44.848 | 99.00th=[26870], 99.50th=[27657], 99.90th=[29754], 99.95th=[31589], 00:13:44.848 | 99.99th=[31589] 00:13:44.848 write: IOPS=4099, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1005msec); 0 zone resets 00:13:44.848 slat (usec): min=4, max=26578, avg=120.00, stdev=642.46 00:13:44.848 clat (usec): min=2698, max=32217, avg=16317.14, stdev=6860.37 00:13:44.848 lat (usec): min=4001, max=32237, avg=16437.13, stdev=6897.14 00:13:44.848 clat percentiles (usec): 00:13:44.848 | 1.00th=[ 4752], 5.00th=[ 5932], 10.00th=[ 7046], 20.00th=[ 8979], 00:13:44.848 | 30.00th=[10683], 40.00th=[13173], 50.00th=[16450], 60.00th=[20579], 00:13:44.848 | 70.00th=[22152], 80.00th=[23462], 90.00th=[24773], 95.00th=[25560], 00:13:44.848 | 99.00th=[27395], 99.50th=[27919], 99.90th=[29754], 99.95th=[31065], 00:13:44.848 | 99.99th=[32113] 00:13:44.848 bw ( KiB/s): min=16384, max=16384, per=28.45%, avg=16384.00, stdev= 0.00, samples=2 00:13:44.848 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:13:44.848 lat (msec) : 4=0.17%, 10=22.59%, 20=49.60%, 50=27.64% 00:13:44.848 cpu : usr=4.68%, sys=8.27%, ctx=470, majf=0, minf=15 00:13:44.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:13:44.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:44.848 issued rwts: total=4096,4120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:44.848 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:44.848 job2: (groupid=0, jobs=1): err= 0: pid=2787614: Wed May 15 10:54:00 2024 00:13:44.848 read: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec) 00:13:44.848 slat (usec): min=2, max=48897, avg=269.73, stdev=2178.69 00:13:44.848 clat (msec): min=8, max=103, avg=33.12, stdev=21.02 00:13:44.848 lat (msec): min=8, max=103, avg=33.39, stdev=21.21 00:13:44.848 clat percentiles (msec): 00:13:44.848 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 15], 20.00th=[ 16], 00:13:44.848 | 30.00th=[ 19], 40.00th=[ 21], 50.00th=[ 24], 60.00th=[ 32], 00:13:44.848 | 70.00th=[ 42], 80.00th=[ 50], 90.00th=[ 71], 95.00th=[ 80], 00:13:44.848 | 99.00th=[ 88], 99.50th=[ 97], 99.90th=[ 97], 99.95th=[ 99], 00:13:44.848 | 99.99th=[ 104] 00:13:44.848 write: IOPS=2286, BW=9146KiB/s (9366kB/s)(9192KiB/1005msec); 0 zone resets 00:13:44.848 slat (usec): min=3, max=30040, avg=172.77, stdev=1287.12 00:13:44.848 clat (usec): min=438, max=96664, avg=25741.73, stdev=16487.47 00:13:44.848 lat (usec): min=1052, max=96669, avg=25914.49, stdev=16603.94 00:13:44.848 clat percentiles (usec): 00:13:44.848 | 1.00th=[ 4817], 5.00th=[ 6390], 10.00th=[10159], 20.00th=[13698], 00:13:44.848 | 30.00th=[15533], 40.00th=[18482], 50.00th=[20841], 60.00th=[22938], 00:13:44.848 | 70.00th=[30016], 80.00th=[36963], 90.00th=[54789], 95.00th=[66323], 00:13:44.848 | 99.00th=[68682], 99.50th=[68682], 99.90th=[87557], 99.95th=[95945], 00:13:44.848 | 99.99th=[96994] 00:13:44.848 bw ( KiB/s): min= 8680, max= 8688, per=15.08%, avg=8684.00, stdev= 5.66, samples=2 00:13:44.848 iops : min= 2170, max= 2172, avg=2171.00, stdev= 1.41, samples=2 00:13:44.848 lat (usec) : 500=0.02% 00:13:44.848 lat (msec) : 2=0.05%, 4=0.30%, 10=6.42%, 20=33.41%, 50=45.54% 00:13:44.848 lat (msec) : 100=14.24%, 250=0.02% 00:13:44.848 cpu : usr=1.59%, sys=2.59%, ctx=219, majf=0, minf=13 00:13:44.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:13:44.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:13:44.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:44.848 issued rwts: total=2048,2298,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:44.848 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:44.848 job3: (groupid=0, jobs=1): err= 0: pid=2787615: Wed May 15 10:54:00 2024 00:13:44.848 read: IOPS=4130, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1002msec) 00:13:44.848 slat (usec): min=2, max=15251, avg=122.40, stdev=900.38 00:13:44.848 clat (usec): min=703, max=38403, avg=15508.11, stdev=4812.46 00:13:44.848 lat (usec): min=2590, max=38418, avg=15630.51, stdev=4870.88 00:13:44.848 clat percentiles (usec): 00:13:44.848 | 1.00th=[ 8225], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[11600], 00:13:44.848 | 30.00th=[12649], 40.00th=[13435], 50.00th=[14353], 60.00th=[15533], 00:13:44.848 | 70.00th=[17171], 80.00th=[19006], 90.00th=[20841], 95.00th=[25297], 00:13:44.848 | 99.00th=[32900], 99.50th=[35390], 99.90th=[38536], 99.95th=[38536], 00:13:44.848 | 99.99th=[38536] 00:13:44.848 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:13:44.848 slat (usec): min=4, max=10965, avg=97.76, stdev=601.29 00:13:44.848 clat (usec): min=2199, max=38404, avg=13651.70, stdev=5136.71 00:13:44.848 lat (usec): min=2212, max=38424, avg=13749.46, stdev=5162.47 00:13:44.848 clat percentiles (usec): 00:13:44.848 | 1.00th=[ 4178], 5.00th=[ 6915], 10.00th=[ 8356], 20.00th=[ 9241], 00:13:44.848 | 30.00th=[10552], 40.00th=[11731], 50.00th=[12649], 60.00th=[13435], 00:13:44.848 | 70.00th=[15139], 80.00th=[18220], 90.00th=[21627], 95.00th=[23987], 00:13:44.848 | 99.00th=[26346], 99.50th=[27657], 99.90th=[29754], 99.95th=[29754], 00:13:44.848 | 99.99th=[38536] 00:13:44.848 bw ( KiB/s): min=15712, max=20480, per=31.43%, avg=18096.00, stdev=3371.49, samples=2 00:13:44.848 iops : min= 3928, max= 5120, avg=4524.00, stdev=842.87, samples=2 00:13:44.848 lat (usec) : 750=0.01% 00:13:44.848 lat (msec) : 4=0.24%, 10=15.46%, 20=70.05%, 50=14.24% 00:13:44.848 cpu : usr=5.49%, sys=7.49%, ctx=338, majf=0, minf=11 00:13:44.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:44.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:13:44.848 issued rwts: total=4139,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:44.848 latency : target=0, window=0, percentile=100.00%, depth=128 00:13:44.848 00:13:44.848 Run status group 0 (all jobs): 00:13:44.848 READ: bw=51.9MiB/s (54.4MB/s), 8151KiB/s-16.1MiB/s (8347kB/s-16.9MB/s), io=52.2MiB (54.7MB), run=1002-1005msec 00:13:44.848 WRITE: bw=56.2MiB/s (59.0MB/s), 9146KiB/s-18.0MiB/s (9366kB/s-18.8MB/s), io=56.5MiB (59.3MB), run=1002-1005msec 00:13:44.848 00:13:44.848 Disk stats (read/write): 00:13:44.848 nvme0n1: ios=2609/2649, merge=0/0, ticks=28523/40533, in_queue=69056, util=85.97% 00:13:44.848 nvme0n2: ios=3371/3584, merge=0/0, ticks=44132/55589, in_queue=99721, util=91.46% 00:13:44.848 nvme0n3: ios=1564/1983, merge=0/0, ticks=29333/26931, in_queue=56264, util=93.65% 00:13:44.848 nvme0n4: ios=3603/4071, merge=0/0, ticks=52526/51587, in_queue=104113, util=94.44% 00:13:44.848 10:54:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:13:44.848 10:54:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2787801 00:13:44.848 10:54:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 
-t read -r 10 00:13:44.848 10:54:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:13:44.848 [global] 00:13:44.848 thread=1 00:13:44.848 invalidate=1 00:13:44.848 rw=read 00:13:44.848 time_based=1 00:13:44.848 runtime=10 00:13:44.848 ioengine=libaio 00:13:44.848 direct=1 00:13:44.848 bs=4096 00:13:44.848 iodepth=1 00:13:44.848 norandommap=1 00:13:44.848 numjobs=1 00:13:44.848 00:13:44.848 [job0] 00:13:44.848 filename=/dev/nvme0n1 00:13:44.848 [job1] 00:13:44.848 filename=/dev/nvme0n2 00:13:44.848 [job2] 00:13:44.848 filename=/dev/nvme0n3 00:13:44.848 [job3] 00:13:44.848 filename=/dev/nvme0n4 00:13:44.848 Could not set queue depth (nvme0n1) 00:13:44.848 Could not set queue depth (nvme0n2) 00:13:44.848 Could not set queue depth (nvme0n3) 00:13:44.848 Could not set queue depth (nvme0n4) 00:13:44.849 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:44.849 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:44.849 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:44.849 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:44.849 fio-3.35 00:13:44.849 Starting 4 threads 00:13:48.128 10:54:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:13:48.128 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=13619200, buflen=4096 00:13:48.128 fio: pid=2787909, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:48.128 10:54:03 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:13:48.128 10:54:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:48.128 10:54:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:13:48.128 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=757760, buflen=4096 00:13:48.128 fio: pid=2787908, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:48.385 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=29786112, buflen=4096 00:13:48.385 fio: pid=2787906, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:13:48.385 10:54:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:48.385 10:54:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:13:48.644 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=11870208, buflen=4096 00:13:48.644 fio: pid=2787907, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:13:48.644 10:54:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:48.644 10:54:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:13:48.644 00:13:48.644 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2787906: Wed May 15 10:54:04 2024 00:13:48.644 read: 
IOPS=2141, BW=8565KiB/s (8771kB/s)(28.4MiB/3396msec) 00:13:48.644 slat (usec): min=4, max=13607, avg=21.14, stdev=259.53 00:13:48.644 clat (usec): min=354, max=36968, avg=441.97, stdev=431.26 00:13:48.644 lat (usec): min=364, max=36979, avg=463.12, stdev=505.00 00:13:48.644 clat percentiles (usec): 00:13:48.644 | 1.00th=[ 367], 5.00th=[ 375], 10.00th=[ 379], 20.00th=[ 392], 00:13:48.644 | 30.00th=[ 404], 40.00th=[ 420], 50.00th=[ 433], 60.00th=[ 445], 00:13:48.644 | 70.00th=[ 457], 80.00th=[ 474], 90.00th=[ 494], 95.00th=[ 523], 00:13:48.644 | 99.00th=[ 594], 99.50th=[ 611], 99.90th=[ 668], 99.95th=[ 881], 00:13:48.644 | 99.99th=[36963] 00:13:48.644 bw ( KiB/s): min= 7584, max= 8888, per=56.67%, avg=8424.00, stdev=445.16, samples=6 00:13:48.644 iops : min= 1896, max= 2222, avg=2106.00, stdev=111.29, samples=6 00:13:48.644 lat (usec) : 500=91.19%, 750=8.72%, 1000=0.07% 00:13:48.644 lat (msec) : 50=0.01% 00:13:48.644 cpu : usr=1.56%, sys=4.15%, ctx=7279, majf=0, minf=1 00:13:48.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:48.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.644 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.644 issued rwts: total=7273,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.644 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:48.644 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=2787907: Wed May 15 10:54:04 2024 00:13:48.644 read: IOPS=787, BW=3149KiB/s (3225kB/s)(11.3MiB/3681msec) 00:13:48.644 slat (usec): min=5, max=20826, avg=23.68, stdev=400.81 00:13:48.644 clat (usec): min=391, max=43115, avg=1242.55, stdev=5126.61 00:13:48.644 lat (usec): min=404, max=63018, avg=1266.22, stdev=5214.97 00:13:48.644 clat percentiles (usec): 00:13:48.644 | 1.00th=[ 420], 5.00th=[ 461], 10.00th=[ 486], 20.00th=[ 510], 00:13:48.644 | 30.00th=[ 523], 40.00th=[ 529], 50.00th=[ 537], 60.00th=[ 553], 00:13:48.644 | 70.00th=[ 578], 80.00th=[ 635], 90.00th=[ 832], 95.00th=[ 938], 00:13:48.644 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42730], 00:13:48.644 | 99.99th=[43254] 00:13:48.644 bw ( KiB/s): min= 96, max= 6864, per=22.24%, avg=3306.86, stdev=2739.18, samples=7 00:13:48.644 iops : min= 24, max= 1716, avg=826.71, stdev=684.80, samples=7 00:13:48.644 lat (usec) : 500=13.35%, 750=73.51%, 1000=10.00% 00:13:48.644 lat (msec) : 2=1.41%, 4=0.07%, 50=1.62% 00:13:48.644 cpu : usr=0.49%, sys=1.77%, ctx=2900, majf=0, minf=1 00:13:48.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:48.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.644 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.644 issued rwts: total=2899,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.644 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:48.644 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2787908: Wed May 15 10:54:04 2024 00:13:48.644 read: IOPS=58, BW=234KiB/s (240kB/s)(740KiB/3161msec) 00:13:48.644 slat (usec): min=7, max=12805, avg=87.14, stdev=937.59 00:13:48.644 clat (usec): min=455, max=42960, avg=16974.15, stdev=19924.55 00:13:48.644 lat (usec): min=470, max=53993, avg=17061.69, stdev=20032.55 00:13:48.644 clat percentiles (usec): 00:13:48.644 | 1.00th=[ 494], 5.00th=[ 537], 10.00th=[ 537], 20.00th=[ 562], 00:13:48.644 | 30.00th=[ 570], 40.00th=[ 619], 
50.00th=[ 717], 60.00th=[21627], 00:13:48.644 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:13:48.644 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:13:48.644 | 99.99th=[42730] 00:13:48.644 bw ( KiB/s): min= 96, max= 856, per=1.62%, avg=241.33, stdev=303.39, samples=6 00:13:48.644 iops : min= 24, max= 214, avg=60.33, stdev=75.85, samples=6 00:13:48.644 lat (usec) : 500=1.08%, 750=54.30%, 1000=3.76% 00:13:48.644 lat (msec) : 50=40.32% 00:13:48.644 cpu : usr=0.03%, sys=0.16%, ctx=191, majf=0, minf=1 00:13:48.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:48.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.644 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.644 issued rwts: total=186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.644 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:48.644 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2787909: Wed May 15 10:54:04 2024 00:13:48.644 read: IOPS=1155, BW=4621KiB/s (4732kB/s)(13.0MiB/2878msec) 00:13:48.644 slat (nsec): min=5820, max=66967, avg=16398.82, stdev=9786.51 00:13:48.644 clat (usec): min=387, max=43085, avg=844.70, stdev=3102.07 00:13:48.644 lat (usec): min=393, max=43095, avg=861.10, stdev=3102.70 00:13:48.644 clat percentiles (usec): 00:13:48.644 | 1.00th=[ 396], 5.00th=[ 408], 10.00th=[ 420], 20.00th=[ 441], 00:13:48.644 | 30.00th=[ 482], 40.00th=[ 506], 50.00th=[ 529], 60.00th=[ 578], 00:13:48.644 | 70.00th=[ 676], 80.00th=[ 832], 90.00th=[ 938], 95.00th=[ 988], 00:13:48.644 | 99.00th=[ 1156], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:13:48.644 | 99.99th=[43254] 00:13:48.644 bw ( KiB/s): min= 96, max= 6688, per=28.25%, avg=4200.00, stdev=2714.95, samples=5 00:13:48.644 iops : min= 24, max= 1672, avg=1050.00, stdev=678.74, samples=5 00:13:48.644 lat (usec) : 500=37.82%, 750=35.51%, 1000=22.67% 00:13:48.644 lat (msec) : 2=3.40%, 50=0.57% 00:13:48.644 cpu : usr=1.08%, sys=2.99%, ctx=3326, majf=0, minf=1 00:13:48.644 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:48.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.644 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:48.644 issued rwts: total=3326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:48.644 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:48.644 00:13:48.644 Run status group 0 (all jobs): 00:13:48.644 READ: bw=14.5MiB/s (15.2MB/s), 234KiB/s-8565KiB/s (240kB/s-8771kB/s), io=53.4MiB (56.0MB), run=2878-3681msec 00:13:48.644 00:13:48.644 Disk stats (read/write): 00:13:48.644 nvme0n1: ios=7188/0, merge=0/0, ticks=2990/0, in_queue=2990, util=94.99% 00:13:48.644 nvme0n2: ios=2896/0, merge=0/0, ticks=3431/0, in_queue=3431, util=95.82% 00:13:48.644 nvme0n3: ios=231/0, merge=0/0, ticks=4310/0, in_queue=4310, util=99.00% 00:13:48.644 nvme0n4: ios=3262/0, merge=0/0, ticks=2740/0, in_queue=2740, util=96.74% 00:13:48.902 10:54:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:48.902 10:54:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:13:49.160 10:54:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs
00:13:49.160 10:54:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:13:49.418 10:54:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:13:49.418 10:54:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:13:49.676 10:54:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:13:49.676 10:54:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:13:49.934 10:54:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:13:49.934 10:54:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2787801
00:13:49.934 10:54:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:13:49.934 10:54:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:50.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:50.192 10:54:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:13:50.192 10:54:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1215 -- # local i=0
00:13:50.192 10:54:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # lsblk -o NAME,SERIAL
00:13:50.192 10:54:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:50.192 10:54:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # lsblk -l -o NAME,SERIAL
00:13:50.192 10:54:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1223 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:50.192 10:54:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # return 0
00:13:50.192 10:54:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:13:50.192 10:54:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:13:50.192 nvmf hotplug test: fio failed as expected
00:13:50.192 10:54:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:50.450 10:54:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:13:50.450 10:54:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:13:50.450 10:54:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:13:50.450 10:54:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:13:50.450 10:54:06 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:13:50.450 10:54:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup
00:13:50.450 10:54:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync
00:13:50.450 10:54:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:50.450 10:54:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e
00:13:50.450 10:54:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:50.450 10:54:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:13:50.450 rmmod nvme_tcp
00:13:50.450 rmmod nvme_fabrics
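For reference, the fio_target teardown traced above can be replayed by hand with the same commands the harness runs; a minimal sketch, assuming the working directory is an SPDK checkout (rpc.py path shortened) and the NQN and state-file names match the trace:

# detach the kernel initiator from the subsystem under test
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
# remove the subsystem on the target side over the RPC socket
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
# drop the fio verify-state files the wrapper leaves behind
rm -f ./local-job0-0-verify.state ./local-job1-1-verify.state ./local-job2-2-verify.state
# unload the initiator transport stack (removes nvme_tcp, nvme_fabrics, nvme_keyring as logged)
modprobe -v -r nvme-tcp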
00:13:50.450 rmmod nvme_keyring 10:54:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:13:50.451 10:54:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e
00:13:50.451 10:54:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0
00:13:50.451 10:54:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2785721 ']'
00:13:50.451 10:54:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2785721
00:13:50.451 10:54:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@946 -- # '[' -z 2785721 ']'
00:13:50.451 10:54:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@950 -- # kill -0 2785721
00:13:50.451 10:54:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # uname
00:13:50.451 10:54:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:13:50.451 10:54:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2785721
00:13:50.452 10:54:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:13:50.452 10:54:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:13:50.452 10:54:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2785721'
00:13:50.452 killing process with pid 2785721
00:13:50.452 10:54:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # kill 2785721
00:13:50.452 [2024-05-15 10:54:06.579275] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:13:50.452 10:54:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@970 -- # wait 2785721
00:13:50.710 10:54:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:13:50.710 10:54:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:13:50.710 10:54:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:13:50.710 10:54:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:50.710 10:54:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns
00:13:50.710 10:54:06 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:50.710 10:54:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:50.710 10:54:06 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:53.268 10:54:08 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:13:53.268
00:13:53.268 real 0m24.355s
00:13:53.268 user 1m20.517s
00:13:53.268 sys 0m7.596s
00:13:53.268 10:54:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # xtrace_disable
00:13:53.268 10:54:08 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:13:53.268 ************************************
00:13:53.268 END TEST nvmf_fio_target
00:13:53.268 ************************************
00:13:53.268 10:54:08 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:13:53.268 10:54:08 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:13:53.268 10:54:08 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:13:53.268 10:54:08 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:13:53.268 ************************************ 00:13:53.268 START TEST nvmf_bdevio 00:13:53.268 ************************************ 00:13:53.268 10:54:08 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:53.268 * Looking for test storage... 00:13:53.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:13:53.268 10:54:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:55.174 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:55.174 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:55.174 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:55.174 
Found net devices under 0000:0a:00.1: cvl_0_1
00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes
00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:13:55.174 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:13:55.433 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:55.433 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms
00:13:55.433
00:13:55.433 --- 10.0.0.2 ping statistics ---
00:13:55.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:55.433 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:55.433 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:55.433 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms
00:13:55.433
00:13:55.433 --- 10.0.0.1 ping statistics ---
00:13:55.433 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:55.433 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@720 -- # xtrace_disable
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2791497
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2791497
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@827 -- # '[' -z 2791497 ']'
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local max_retries=100
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:55.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # xtrace_disable
00:13:55.433 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:13:55.433 [2024-05-15 10:54:11.573509] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization...
00:13:55.433 [2024-05-15 10:54:11.573581] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:55.433 EAL: No free 2048 kB hugepages reported on node 1
00:13:55.433 [2024-05-15 10:54:11.649247] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:55.691 [2024-05-15 10:54:11.758867] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:55.691 [2024-05-15 10:54:11.758966] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:55.691 [2024-05-15 10:54:11.758987] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:55.691 [2024-05-15 10:54:11.758997] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:55.691 [2024-05-15 10:54:11.759084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:55.691 [2024-05-15 10:54:11.759116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:13:55.691 [2024-05-15 10:54:11.759183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:13:55.691 [2024-05-15 10:54:11.759186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:55.691 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:13:55.691 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@860 -- # return 0 00:13:55.691 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:55.691 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:55.691 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:55.691 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:55.691 10:54:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:55.691 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.691 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:55.691 [2024-05-15 10:54:11.913591] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:55.691 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.691 10:54:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:55.691 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.691 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:55.949 Malloc0 00:13:55.950 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.950 10:54:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:55.950 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.950 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:55.950 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.950 10:54:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:55.950 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.950 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:55.950 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.950 10:54:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.950 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.950 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:13:55.950 [2024-05-15 10:54:11.967000] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:55.950 [2024-05-15 10:54:11.967326] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.950 10:54:11 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.950 10:54:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:55.950 10:54:11 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:55.950 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:13:55.950 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:13:55.950 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:13:55.950 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:13:55.950 { 00:13:55.950 "params": { 00:13:55.950 "name": "Nvme$subsystem", 00:13:55.950 "trtype": "$TEST_TRANSPORT", 00:13:55.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:55.950 "adrfam": "ipv4", 00:13:55.950 "trsvcid": "$NVMF_PORT", 00:13:55.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:55.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:55.950 "hdgst": ${hdgst:-false}, 00:13:55.950 "ddgst": ${ddgst:-false} 00:13:55.950 }, 00:13:55.950 "method": "bdev_nvme_attach_controller" 00:13:55.950 } 00:13:55.950 EOF 00:13:55.950 )") 00:13:55.950 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:13:55.950 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:13:55.950 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:13:55.950 10:54:11 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:13:55.950 "params": { 00:13:55.950 "name": "Nvme1", 00:13:55.950 "trtype": "tcp", 00:13:55.950 "traddr": "10.0.0.2", 00:13:55.950 "adrfam": "ipv4", 00:13:55.950 "trsvcid": "4420", 00:13:55.950 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:55.950 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:55.950 "hdgst": false, 00:13:55.950 "ddgst": false 00:13:55.950 }, 00:13:55.950 "method": "bdev_nvme_attach_controller" 00:13:55.950 }' 00:13:55.950 [2024-05-15 10:54:12.013204] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
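
The bdevio initiator above never sees a config file on disk: gen_nvmf_target_json pipes its output to the app as --json /dev/fd/62. For reference, a standalone file with the same effect would look roughly like the sketch below; the "subsystems"/"bdev" wrapper is the usual SPDK app-config shape that gen_nvmf_target_json adds around the object printed above (the file name is illustrative, and some SPDK versions put extra bdev options in the wrapper):

# Sketch: write the same initiator config to a file and hand it to bdevio
cat > /tmp/initiator.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./test/bdev/bdevio/bdevio --json /tmp/initiator.json
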
00:13:55.950 [2024-05-15 10:54:12.013293] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2791530 ] 00:13:55.950 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.950 [2024-05-15 10:54:12.083753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:56.209 [2024-05-15 10:54:12.200440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.209 [2024-05-15 10:54:12.200489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:56.209 [2024-05-15 10:54:12.200492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.209 I/O targets: 00:13:56.209 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:56.209 00:13:56.209 00:13:56.209 CUnit - A unit testing framework for C - Version 2.1-3 00:13:56.209 http://cunit.sourceforge.net/ 00:13:56.209 00:13:56.209 00:13:56.209 Suite: bdevio tests on: Nvme1n1 00:13:56.487 Test: blockdev write read block ...passed 00:13:56.487 Test: blockdev write zeroes read block ...passed 00:13:56.487 Test: blockdev write zeroes read no split ...passed 00:13:56.487 Test: blockdev write zeroes read split ...passed 00:13:56.487 Test: blockdev write zeroes read split partial ...passed 00:13:56.487 Test: blockdev reset ...[2024-05-15 10:54:12.637843] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:13:56.487 [2024-05-15 10:54:12.637955] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a49f0 (9): Bad file descriptor 00:13:56.487 [2024-05-15 10:54:12.649459] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:13:56.487 passed 00:13:56.487 Test: blockdev write read 8 blocks ...passed 00:13:56.487 Test: blockdev write read size > 128k ...passed 00:13:56.487 Test: blockdev write read invalid size ...passed 00:13:56.487 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:56.487 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:56.487 Test: blockdev write read max offset ...passed 00:13:56.745 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:56.745 Test: blockdev writev readv 8 blocks ...passed 00:13:56.745 Test: blockdev writev readv 30 x 1block ...passed 00:13:56.745 Test: blockdev writev readv block ...passed 00:13:56.745 Test: blockdev writev readv size > 128k ...passed 00:13:56.745 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:56.745 Test: blockdev comparev and writev ...[2024-05-15 10:54:12.866007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:56.745 [2024-05-15 10:54:12.866045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:56.745 [2024-05-15 10:54:12.866070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:56.745 [2024-05-15 10:54:12.866087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:56.745 [2024-05-15 10:54:12.866505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:56.745 [2024-05-15 10:54:12.866531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:56.745 [2024-05-15 10:54:12.866554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:56.745 [2024-05-15 10:54:12.866570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:56.745 [2024-05-15 10:54:12.866966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:56.745 [2024-05-15 10:54:12.867011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:56.745 [2024-05-15 10:54:12.867036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:56.745 [2024-05-15 10:54:12.867055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:56.745 [2024-05-15 10:54:12.867472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:56.745 [2024-05-15 10:54:12.867497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:56.745 [2024-05-15 10:54:12.867519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:56.745 [2024-05-15 10:54:12.867536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:56.745 passed 00:13:56.745 Test: blockdev nvme passthru rw ...passed 00:13:56.745 Test: blockdev nvme passthru vendor specific ...[2024-05-15 10:54:12.949361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:56.745 [2024-05-15 10:54:12.949389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:56.745 [2024-05-15 10:54:12.949633] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:56.745 [2024-05-15 10:54:12.949657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:56.745 [2024-05-15 10:54:12.949901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:56.745 [2024-05-15 10:54:12.949925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:56.745 [2024-05-15 10:54:12.950171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:56.746 [2024-05-15 10:54:12.950196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:56.746 passed 00:13:56.746 Test: blockdev nvme admin passthru ...passed 00:13:57.004 Test: blockdev copy ...passed 00:13:57.004 00:13:57.004 Run Summary: Type Total Ran Passed Failed Inactive 00:13:57.004 suites 1 1 n/a 0 0 00:13:57.004 tests 23 23 23 0 0 00:13:57.004 asserts 152 152 152 0 n/a 00:13:57.004 00:13:57.004 Elapsed time = 1.180 seconds 00:13:57.004 10:54:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:57.004 10:54:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:57.004 10:54:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:57.262 10:54:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:57.262 10:54:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:57.262 10:54:13 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:57.262 10:54:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:57.262 10:54:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:13:57.262 10:54:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:57.263 10:54:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:13:57.263 10:54:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:57.263 10:54:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:57.263 rmmod nvme_tcp 00:13:57.263 rmmod nvme_fabrics 00:13:57.263 rmmod nvme_keyring 00:13:57.263 10:54:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:57.263 10:54:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:13:57.263 10:54:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:13:57.263 10:54:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2791497 ']' 00:13:57.263 10:54:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2791497 00:13:57.263 10:54:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@946 -- # '[' -z 
2791497 ']' 00:13:57.263 10:54:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@950 -- # kill -0 2791497 00:13:57.263 10:54:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # uname 00:13:57.263 10:54:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:13:57.263 10:54:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2791497 00:13:57.263 10:54:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:13:57.263 10:54:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:13:57.263 10:54:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2791497' 00:13:57.263 killing process with pid 2791497 00:13:57.263 10:54:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # kill 2791497 00:13:57.263 [2024-05-15 10:54:13.306109] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:57.263 10:54:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@970 -- # wait 2791497 00:13:57.522 10:54:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:57.522 10:54:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:57.522 10:54:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:57.522 10:54:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:57.522 10:54:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:57.522 10:54:13 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.522 10:54:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:57.522 10:54:13 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.519 10:54:15 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:59.519 00:13:59.519 real 0m6.688s 00:13:59.519 user 0m10.177s 00:13:59.519 sys 0m2.356s 00:13:59.519 10:54:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # xtrace_disable 00:13:59.519 10:54:15 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:59.519 ************************************ 00:13:59.519 END TEST nvmf_bdevio 00:13:59.519 ************************************ 00:13:59.519 10:54:15 nvmf_tcp -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:13:59.519 10:54:15 nvmf_tcp -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:59.519 10:54:15 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 4 -le 1 ']' 00:13:59.519 10:54:15 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:13:59.519 10:54:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:59.519 ************************************ 00:13:59.519 START TEST nvmf_bdevio_no_huge 00:13:59.519 ************************************ 00:13:59.519 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:13:59.519 * Looking for test storage... 
00:13:59.779 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:59.779 10:54:15 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:13:59.779 10:54:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:02.312 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:02.312 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:02.312 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:02.312 10:54:18 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:02.312 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:02.312 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:02.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:02.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:14:02.313 00:14:02.313 --- 10.0.0.2 ping statistics --- 00:14:02.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.313 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:02.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:02.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:14:02.313 00:14:02.313 --- 10.0.0.1 ping statistics --- 00:14:02.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:02.313 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2794012 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2794012 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@827 -- # '[' -z 2794012 ']' 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
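
The point of this stage is the hugepage-free target launch just issued. A minimal reconstruction of that command, run from the SPDK repo root, with the flags spelled out (namespace name, core mask and memory size are the values from this run):

# Flag meanings:
#   -i 0        shared-memory instance id (also the suffix of /dev/shm/nvmf_trace.0)
#   -e 0xFFFF   enable every tracepoint group
#   --no-huge   back DPDK memory with anonymous pages instead of hugepages
#   -s 1024     cap memory at 1024 MB (the harness always pairs --no-huge with an explicit -s)
#   -m 0x78     core mask 0b1111000, i.e. reactors on cores 3-6, matching the log above
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
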
00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:02.313 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:02.313 [2024-05-15 10:54:18.366946] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:14:02.313 [2024-05-15 10:54:18.367038] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:02.313 [2024-05-15 10:54:18.449907] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:02.571 [2024-05-15 10:54:18.573671] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.571 [2024-05-15 10:54:18.573730] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.571 [2024-05-15 10:54:18.573747] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.571 [2024-05-15 10:54:18.573761] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.571 [2024-05-15 10:54:18.573772] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.571 [2024-05-15 10:54:18.573884] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:02.571 [2024-05-15 10:54:18.573964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:02.571 [2024-05-15 10:54:18.574018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:02.571 [2024-05-15 10:54:18.574022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.571 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:02.571 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # return 0 00:14:02.571 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:02.571 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:02.571 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:02.571 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.571 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:02.571 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.571 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:02.571 [2024-05-15 10:54:18.707957] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:02.571 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.572 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:02.572 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.572 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:02.572 Malloc0 00:14:02.572 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.572 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:02.572 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.572 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:02.572 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.572 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:02.572 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.572 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:02.572 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.572 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:02.572 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.572 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:02.572 [2024-05-15 10:54:18.745856] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:02.572 [2024-05-15 10:54:18.746160] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:02.572 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.572 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:02.572 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:02.572 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:14:02.572 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:14:02.572 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:02.572 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:02.572 { 00:14:02.572 "params": { 00:14:02.572 "name": "Nvme$subsystem", 00:14:02.572 "trtype": "$TEST_TRANSPORT", 00:14:02.572 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:02.572 "adrfam": "ipv4", 00:14:02.572 "trsvcid": "$NVMF_PORT", 00:14:02.572 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:02.572 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:02.572 "hdgst": ${hdgst:-false}, 00:14:02.572 "ddgst": ${ddgst:-false} 00:14:02.572 }, 00:14:02.572 "method": "bdev_nvme_attach_controller" 00:14:02.572 } 00:14:02.572 EOF 00:14:02.572 )") 00:14:02.572 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:14:02.572 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
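
The rpc_cmd provisioning a few lines above is the harness's thin wrapper around scripts/rpc.py talking to the target's /var/tmp/spdk.sock. Spelled out as plain invocations from the SPDK repo root (same arguments as this run):

# TCP transport (-o and -u 8192 are the harness's TCP tuning flags)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# 64 MiB ramdisk with 512 B blocks to back the namespace
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# subsystem allowing any host (-a), serial number via -s
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# listen on the in-namespace address verified by the pings earlier
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
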
00:14:02.572 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:14:02.572 10:54:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:02.572 "params": { 00:14:02.572 "name": "Nvme1", 00:14:02.572 "trtype": "tcp", 00:14:02.572 "traddr": "10.0.0.2", 00:14:02.572 "adrfam": "ipv4", 00:14:02.572 "trsvcid": "4420", 00:14:02.572 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.572 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:02.572 "hdgst": false, 00:14:02.572 "ddgst": false 00:14:02.572 }, 00:14:02.572 "method": "bdev_nvme_attach_controller" 00:14:02.572 }' 00:14:02.572 [2024-05-15 10:54:18.791972] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:14:02.572 [2024-05-15 10:54:18.792058] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2794036 ] 00:14:02.830 [2024-05-15 10:54:18.865558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:02.830 [2024-05-15 10:54:18.984221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.830 [2024-05-15 10:54:18.984273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.830 [2024-05-15 10:54:18.984276] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:03.087 I/O targets: 00:14:03.087 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:03.087 00:14:03.087 00:14:03.087 CUnit - A unit testing framework for C - Version 2.1-3 00:14:03.087 http://cunit.sourceforge.net/ 00:14:03.087 00:14:03.087 00:14:03.087 Suite: bdevio tests on: Nvme1n1 00:14:03.087 Test: blockdev write read block ...passed 00:14:03.344 Test: blockdev write zeroes read block ...passed 00:14:03.344 Test: blockdev write zeroes read no split ...passed 00:14:03.344 Test: blockdev write zeroes read split ...passed 00:14:03.344 Test: blockdev write zeroes read split partial ...passed 00:14:03.345 Test: blockdev reset ...[2024-05-15 10:54:19.484485] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:03.345 [2024-05-15 10:54:19.484605] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x60a340 (9): Bad file descriptor 00:14:03.345 [2024-05-15 10:54:19.500943] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:03.345 passed 00:14:03.345 Test: blockdev write read 8 blocks ...passed 00:14:03.345 Test: blockdev write read size > 128k ...passed 00:14:03.345 Test: blockdev write read invalid size ...passed 00:14:03.603 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:03.603 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:03.603 Test: blockdev write read max offset ...passed 00:14:03.603 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:03.603 Test: blockdev writev readv 8 blocks ...passed 00:14:03.603 Test: blockdev writev readv 30 x 1block ...passed 00:14:03.603 Test: blockdev writev readv block ...passed 00:14:03.603 Test: blockdev writev readv size > 128k ...passed 00:14:03.603 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:03.603 Test: blockdev comparev and writev ...[2024-05-15 10:54:19.800985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.603 [2024-05-15 10:54:19.801024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:03.603 [2024-05-15 10:54:19.801048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.603 [2024-05-15 10:54:19.801065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:03.603 [2024-05-15 10:54:19.801478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.603 [2024-05-15 10:54:19.801502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:03.603 [2024-05-15 10:54:19.801524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.603 [2024-05-15 10:54:19.801540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:03.603 [2024-05-15 10:54:19.801938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.603 [2024-05-15 10:54:19.801962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:03.603 [2024-05-15 10:54:19.801993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.603 [2024-05-15 10:54:19.802009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:03.603 [2024-05-15 10:54:19.802404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.603 [2024-05-15 10:54:19.802427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:03.603 [2024-05-15 10:54:19.802448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:03.603 [2024-05-15 10:54:19.802464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:03.861 passed 00:14:03.861 Test: blockdev nvme passthru rw ...passed 00:14:03.861 Test: blockdev nvme passthru vendor specific ...[2024-05-15 10:54:19.884366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:03.861 [2024-05-15 10:54:19.884398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:03.861 [2024-05-15 10:54:19.884631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:03.861 [2024-05-15 10:54:19.884655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:03.861 [2024-05-15 10:54:19.884892] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:03.861 [2024-05-15 10:54:19.884917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:03.861 [2024-05-15 10:54:19.885181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:03.861 [2024-05-15 10:54:19.885214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:03.861 passed 00:14:03.861 Test: blockdev nvme admin passthru ...passed 00:14:03.861 Test: blockdev copy ...passed 00:14:03.861 00:14:03.861 Run Summary: Type Total Ran Passed Failed Inactive 00:14:03.861 suites 1 1 n/a 0 0 00:14:03.861 tests 23 23 23 0 0 00:14:03.861 asserts 152 152 152 0 n/a 00:14:03.861 00:14:03.861 Elapsed time = 1.348 seconds 00:14:04.119 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:04.119 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.119 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:04.119 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.119 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:04.119 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:04.119 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:04.119 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:14:04.119 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:04.119 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:14:04.119 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:04.119 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:04.119 rmmod nvme_tcp 00:14:04.377 rmmod nvme_fabrics 00:14:04.377 rmmod nvme_keyring 00:14:04.377 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:04.377 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:14:04.377 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:14:04.377 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2794012 ']' 00:14:04.377 10:54:20 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2794012 00:14:04.377 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@946 -- # '[' -z 2794012 ']' 00:14:04.377 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # kill -0 2794012 00:14:04.377 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # uname 00:14:04.377 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:04.377 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2794012 00:14:04.377 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # process_name=reactor_3 00:14:04.377 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # '[' reactor_3 = sudo ']' 00:14:04.377 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2794012' 00:14:04.377 killing process with pid 2794012 00:14:04.377 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # kill 2794012 00:14:04.377 [2024-05-15 10:54:20.419886] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:04.377 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # wait 2794012 00:14:04.636 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:04.636 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:04.636 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:04.636 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:04.636 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:04.636 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.636 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:04.636 10:54:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.174 10:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:07.174 00:14:07.174 real 0m7.189s 00:14:07.174 user 0m12.100s 00:14:07.174 sys 0m2.865s 00:14:07.174 10:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # xtrace_disable 00:14:07.174 10:54:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:07.174 ************************************ 00:14:07.174 END TEST nvmf_bdevio_no_huge 00:14:07.174 ************************************ 00:14:07.174 10:54:22 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:07.174 10:54:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:14:07.174 10:54:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:14:07.174 10:54:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:07.174 ************************************ 00:14:07.174 START TEST nvmf_tls 00:14:07.174 ************************************ 00:14:07.174 10:54:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 
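
Every stage in this log is dispatched the same way: run_test emits the START/END banners and the real/user/sys timings seen above. A simplified sketch of its shape, inferred from that output (the real helper in autotest_common.sh additionally tracks nesting and exit status):

run_test() {
    local suite=$1
    shift
    echo "************************************"
    echo "START TEST $suite"
    echo "************************************"
    time "$@"    # run the suite, e.g. tls.sh --transport=tcp as just launched
    echo "************************************"
    echo "END TEST $suite"
    echo "************************************"
}
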
00:14:07.174 * Looking for test storage... 00:14:07.174 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:07.174 10:54:22 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:07.174 10:54:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:07.174 10:54:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:07.174 10:54:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:07.174 10:54:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:07.174 10:54:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:07.174 10:54:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:07.174 10:54:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:07.174 10:54:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:07.174 10:54:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:07.174 10:54:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:07.174 10:54:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:07.174 10:54:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:07.174 10:54:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:07.174 10:54:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:07.174 10:54:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:07.174 10:54:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:07.174 10:54:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:07.174 10:54:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:14:07.174 10:54:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@291 -- # pci_devs=() 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:09.706 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:09.706 
10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:09.706 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:09.706 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:09.706 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:09.706 
10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:09.706 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
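Condensed, the namespace plumbing traced above gives the target its own network stack: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side, while its peer port (cvl_0_1) stays in the root namespace as the initiator side. The same steps as a standalone sketch (commands copied from the trace):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in

The cross-namespace pings that follow are just a sanity check that 10.0.0.1 and 10.0.0.2 can reach each other before the target starts.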
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:14:09.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:09.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms
00:14:09.707
00:14:09.707 --- 10.0.0.2 ping statistics ---
00:14:09.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:09.707 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:09.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:09.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms
00:14:09.707
00:14:09.707 --- 10.0.0.1 ping statistics ---
00:14:09.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:09.707 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2796522
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2796522
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2796522 ']'
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable
00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:14:09.707 [2024-05-15 10:54:25.513249] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization...
00:14:09.707 [2024-05-15 10:54:25.513323] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:09.707 EAL: No free 2048 kB hugepages reported on node 1
00:14:09.707 [2024-05-15 10:54:25.588560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:09.707 [2024-05-15 10:54:25.693391] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:09.707 [2024-05-15 10:54:25.693465] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:09.707 [2024-05-15 10:54:25.693479] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:09.707 [2024-05-15 10:54:25.693491] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:09.707 [2024-05-15 10:54:25.693500] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:09.707 [2024-05-15 10:54:25.693528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:14:09.707 10:54:25 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:09.965 true 00:14:09.965 10:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:09.965 10:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:14:10.224 10:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:14:10.224 10:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:14:10.224 10:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:10.509 10:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:10.509 10:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:14:10.767 10:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:14:10.767 10:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:14:10.767 10:54:26 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:11.025 10:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:11.025 10:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:14:11.283 10:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:14:11.283 10:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:14:11.283 10:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:11.283 10:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:14:11.542 10:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:14:11.542 10:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:14:11.542 10:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:11.799 10:54:27 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:14:11.799 10:54:27 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls
00:14:12.057 10:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true
00:14:12.057 10:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]]
00:14:12.057 10:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls
00:14:12.316 10:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:14:12.316 10:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]]
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python -
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python -
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.sNwsB23NDe
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.EqoIvoBvjV
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.sNwsB23NDe
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.EqoIvoBvjV
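format_interchange_psk above wraps a configured key in the NVMe TLS PSK interchange form, NVMeTLSkey-1:<hash>:<base64 blob>:, where hash id 01 selects HMAC-SHA-256. A minimal sketch of the transformation, assuming the helper appends a little-endian CRC32 of the key bytes before base64-encoding (if that assumption holds, this reproduces the first key printed above):

# hypothetical standalone equivalent of: format_interchange_psk <key> 1
python3 - <<'EOF'
import base64, struct, zlib
key = b"00112233445566778899aabbccddeeff"        # configured key from the trace
blob = key + struct.pack("<I", zlib.crc32(key))  # assumption: CRC32 appended little-endian
print("NVMeTLSkey-1:01:%s:" % base64.b64encode(blob).decode())
EOF

The two resulting keys are written to mktemp files (/tmp/tmp.sNwsB23NDe and /tmp/tmp.EqoIvoBvjV) and chmod 0600 so they can be handed to target and initiator as PSK paths.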
00:14:12.574 10:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
00:14:12.832 10:54:28 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init
00:14:13.399 10:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.sNwsB23NDe
00:14:13.399 10:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.sNwsB23NDe
00:14:13.399 10:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:14:13.658 [2024-05-15 10:54:29.649068] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:13.658 10:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:14:13.916 10:54:29 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:14:14.173 [2024-05-15 10:54:30.198528] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:14:14.173 [2024-05-15 10:54:30.198627] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:14:14.173 [2024-05-15 10:54:30.198826] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:14.173 10:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:14:14.431 malloc0
00:14:14.431 10:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:14:14.689 10:54:30 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sNwsB23NDe
00:14:14.946 [2024-05-15 10:54:31.036787] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
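The target bring-up traced above condenses to a short RPC script: pin the ssl socket implementation to TLS 1.3, start the framework (the target was launched with --wait-for-rpc), then build a TLS-enabled subsystem. Condensed sketch (rpc.py stands for the full scripts/rpc.py path used in the trace):

rpc.py sock_impl_set_options -i ssl --tls-version 13
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sNwsB23NDe

Only host1, presenting the key stored in /tmp/tmp.sNwsB23NDe, is allowed to connect; that pairing is exactly what the positive perf run and the negative cases below exercise.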
00:14:14.946 10:54:31 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.sNwsB23NDe
00:14:14.946 EAL: No free 2048 kB hugepages reported on node 1
00:14:27.135 Initializing NVMe Controllers
00:14:27.135 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:27.135 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:14:27.135 Initialization complete. Launching workers.
00:14:27.135 ========================================================
00:14:27.135 Latency(us)
00:14:27.135 Device Information : IOPS MiB/s Average min max
00:14:27.135 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7581.07 29.61 8444.98 1398.31 9804.97
00:14:27.135 ========================================================
00:14:27.135 Total : 7581.07 29.61 8444.98 1398.31 9804.97
00:14:27.135
00:14:27.135 10:54:41 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sNwsB23NDe
00:14:27.135 10:54:41 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:14:27.135 10:54:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:14:27.135 10:54:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:14:27.135 10:54:41 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sNwsB23NDe'
00:14:27.135 10:54:41 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:14:27.135 10:54:41 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2798434
00:14:27.135 10:54:41 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:14:27.135 10:54:41 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:14:27.135 10:54:41 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2798434 /var/tmp/bdevperf.sock
00:14:27.135 10:54:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2798434 ']'
00:14:27.135 10:54:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:14:27.135 10:54:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100
00:14:27.135 10:54:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:14:27.135 10:54:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable
00:14:27.135 10:54:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:14:27.135 [2024-05-15 10:54:41.212166] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization...
00:14:27.135 [2024-05-15 10:54:41.212261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2798434 ] 00:14:27.135 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.135 [2024-05-15 10:54:41.282075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.135 [2024-05-15 10:54:41.388468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:27.135 10:54:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:27.135 10:54:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:27.135 10:54:41 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sNwsB23NDe 00:14:27.135 [2024-05-15 10:54:41.772622] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:27.135 [2024-05-15 10:54:41.772745] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:27.135 TLSTESTn1 00:14:27.135 10:54:41 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:27.135 Running I/O for 10 seconds... 00:14:37.106 00:14:37.106 Latency(us) 00:14:37.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:37.106 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:37.106 Verification LBA range: start 0x0 length 0x2000 00:14:37.106 TLSTESTn1 : 10.10 1084.95 4.24 0.00 0.00 117508.09 5971.06 159228.21 00:14:37.106 =================================================================================================================== 00:14:37.106 Total : 1084.95 4.24 0.00 0.00 117508.09 5971.06 159228.21 00:14:37.106 0 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2798434 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2798434 ']' 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2798434 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2798434 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2798434' 00:14:37.106 killing process with pid 2798434 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2798434 00:14:37.106 Received shutdown signal, test time was about 10.000000 seconds 00:14:37.106 00:14:37.106 Latency(us) 00:14:37.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:14:37.106 =================================================================================================================== 00:14:37.106 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:37.106 [2024-05-15 10:54:52.154771] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2798434 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EqoIvoBvjV 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EqoIvoBvjV 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EqoIvoBvjV 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.EqoIvoBvjV' 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2799753 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2799753 /var/tmp/bdevperf.sock 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2799753 ']' 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:37.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:37.106 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:37.107 [2024-05-15 10:54:52.452091] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:14:37.107 [2024-05-15 10:54:52.452171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2799753 ] 00:14:37.107 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.107 [2024-05-15 10:54:52.518720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.107 [2024-05-15 10:54:52.623130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:37.107 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:37.107 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:37.107 10:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EqoIvoBvjV 00:14:37.107 [2024-05-15 10:54:52.966741] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:37.107 [2024-05-15 10:54:52.966860] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:37.107 [2024-05-15 10:54:52.972249] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:37.107 [2024-05-15 10:54:52.972744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1575130 (107): Transport endpoint is not connected 00:14:37.107 [2024-05-15 10:54:52.973734] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1575130 (9): Bad file descriptor 00:14:37.107 [2024-05-15 10:54:52.974733] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:37.107 [2024-05-15 10:54:52.974756] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:37.107 [2024-05-15 10:54:52.974775] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
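This failure is the expected outcome: the attach was issued with the second key (/tmp/tmp.EqoIvoBvjV) while the target only accepts the first key for host1, so the TLS handshake cannot complete and the controller never initializes. The failing client-side call, isolated from the trace above (rpc.py again stands for the full scripts/rpc.py path):

rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.EqoIvoBvjV

The NOT wrapper inverts the exit status, so the suite counts this rejection as a pass; the JSON-RPC error dump follows.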
00:14:37.107 request:
00:14:37.107 {
00:14:37.107 "name": "TLSTEST",
00:14:37.107 "trtype": "tcp",
00:14:37.107 "traddr": "10.0.0.2",
00:14:37.107 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:14:37.107 "adrfam": "ipv4",
00:14:37.107 "trsvcid": "4420",
00:14:37.107 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:14:37.107 "psk": "/tmp/tmp.EqoIvoBvjV",
00:14:37.107 "method": "bdev_nvme_attach_controller",
00:14:37.107 "req_id": 1
00:14:37.107 }
00:14:37.107 Got JSON-RPC error response
00:14:37.107 response:
00:14:37.107 {
00:14:37.107 "code": -32602,
00:14:37.107 "message": "Invalid parameters"
00:14:37.107 }
00:14:37.107 10:54:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2799753
00:14:37.107 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2799753 ']'
00:14:37.107 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2799753
00:14:37.107 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname
00:14:37.107 10:54:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2799753
00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2799753' killing process with pid 2799753
00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2799753
00:14:37.107 Received shutdown signal, test time was about 10.000000 seconds
00:14:37.107
00:14:37.107 Latency(us)
00:14:37.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:37.107 ===================================================================================================================
00:14:37.107 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:14:37.107 [2024-05-15 10:54:53.027302] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2799753
00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1
00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1
00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.sNwsB23NDe
00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0
00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.sNwsB23NDe
00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf
00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf
00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640
-- # case "$(type -t "$arg")" in 00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.sNwsB23NDe 00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sNwsB23NDe' 00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2799779 00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2799779 /var/tmp/bdevperf.sock 00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2799779 ']' 00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:37.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:37.107 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:37.107 [2024-05-15 10:54:53.331569] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:14:37.107 [2024-05-15 10:54:53.331665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2799779 ] 00:14:37.365 EAL: No free 2048 kB hugepages reported on node 1 00:14:37.365 [2024-05-15 10:54:53.406485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.365 [2024-05-15 10:54:53.515266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:37.623 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:37.623 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:37.623 10:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.sNwsB23NDe 00:14:37.882 [2024-05-15 10:54:53.903044] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:37.882 [2024-05-15 10:54:53.903156] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:37.882 [2024-05-15 10:54:53.912309] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:37.882 [2024-05-15 10:54:53.912339] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:37.882 [2024-05-15 10:54:53.912391] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:37.882 [2024-05-15 10:54:53.913303] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe05130 (107): Transport endpoint is not connected 00:14:37.882 [2024-05-15 10:54:53.914293] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe05130 (9): Bad file descriptor 00:14:37.882 [2024-05-15 10:54:53.915293] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:37.882 [2024-05-15 10:54:53.915315] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:37.882 [2024-05-15 10:54:53.915334] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:37.882 request:
00:14:37.882 {
00:14:37.882 "name": "TLSTEST",
00:14:37.882 "trtype": "tcp",
00:14:37.882 "traddr": "10.0.0.2",
00:14:37.882 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:14:37.882 "adrfam": "ipv4",
00:14:37.882 "trsvcid": "4420",
00:14:37.882 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:14:37.882 "psk": "/tmp/tmp.sNwsB23NDe",
00:14:37.882 "method": "bdev_nvme_attach_controller",
00:14:37.882 "req_id": 1
00:14:37.882 }
00:14:37.882 Got JSON-RPC error response
00:14:37.882 response:
00:14:37.882 {
00:14:37.882 "code": -32602,
00:14:37.882 "message": "Invalid parameters"
00:14:37.882 }
00:14:37.883 10:54:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2799779
00:14:37.883 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2799779 ']'
00:14:37.883 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2799779
00:14:37.883 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname
00:14:37.883 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:14:37.883 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2799779
00:14:37.883 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:14:37.883 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:14:37.883 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2799779' killing process with pid 2799779
00:14:37.883 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2799779
00:14:37.883 Received shutdown signal, test time was about 10.000000 seconds
00:14:37.883
00:14:37.883 Latency(us)
00:14:37.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:37.883 ===================================================================================================================
00:14:37.883 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:14:37.883 [2024-05-15 10:54:53.968234] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:14:37.883 10:54:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2799779
00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1
00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1
00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sNwsB23NDe
00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0
00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sNwsB23NDe
00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf
00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf
00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640
-- # case "$(type -t "$arg")" in 00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.sNwsB23NDe 00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sNwsB23NDe' 00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2799910 00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2799910 /var/tmp/bdevperf.sock 00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2799910 ']' 00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:38.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:38.141 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:38.141 [2024-05-15 10:54:54.264899] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:14:38.141 [2024-05-15 10:54:54.265005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2799910 ] 00:14:38.141 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.141 [2024-05-15 10:54:54.336540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.399 [2024-05-15 10:54:54.441685] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.399 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:38.399 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:38.399 10:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sNwsB23NDe 00:14:38.658 [2024-05-15 10:54:54.802129] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:38.658 [2024-05-15 10:54:54.802264] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:38.659 [2024-05-15 10:54:54.807552] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:38.659 [2024-05-15 10:54:54.807585] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:38.659 [2024-05-15 10:54:54.807624] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:38.659 [2024-05-15 10:54:54.808181] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x696130 (107): Transport endpoint is not connected 00:14:38.659 [2024-05-15 10:54:54.809169] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x696130 (9): Bad file descriptor 00:14:38.659 [2024-05-15 10:54:54.810167] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:38.659 [2024-05-15 10:54:54.810189] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:38.659 [2024-05-15 10:54:54.810209] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:14:38.659 request: 00:14:38.659 { 00:14:38.659 "name": "TLSTEST", 00:14:38.659 "trtype": "tcp", 00:14:38.659 "traddr": "10.0.0.2", 00:14:38.659 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:38.659 "adrfam": "ipv4", 00:14:38.659 "trsvcid": "4420", 00:14:38.659 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:38.659 "psk": "/tmp/tmp.sNwsB23NDe", 00:14:38.659 "method": "bdev_nvme_attach_controller", 00:14:38.659 "req_id": 1 00:14:38.659 } 00:14:38.659 Got JSON-RPC error response 00:14:38.659 response: 00:14:38.659 { 00:14:38.659 "code": -32602, 00:14:38.659 "message": "Invalid parameters" 00:14:38.659 } 00:14:38.659 10:54:54 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2799910 00:14:38.659 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2799910 ']' 00:14:38.659 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2799910 00:14:38.659 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:38.659 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:38.659 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2799910 00:14:38.659 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:38.659 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:38.659 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2799910' 00:14:38.659 killing process with pid 2799910 00:14:38.659 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2799910 00:14:38.659 Received shutdown signal, test time was about 10.000000 seconds 00:14:38.659 00:14:38.659 Latency(us) 00:14:38.659 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.659 =================================================================================================================== 00:14:38.659 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:38.659 [2024-05-15 10:54:54.863433] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:38.659 10:54:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2799910 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
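[annotation] The request/response dump above is what rpc.py put on the wire: one JSON-RPC 2.0 request over the application's UNIX domain socket, with the parsed request echoed back alongside the -32602 error. A hand-rolled equivalent of the failing call, as a sketch only (it assumes an nc build with UNIX-socket -U support and that the socket speaks plain unframed JSON, which is how SPDK's RPC server behaves):

printf '%s' '{"jsonrpc":"2.0","id":1,"method":"bdev_nvme_attach_controller",
"params":{"name":"TLSTEST","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4",
"trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode2",
"hostnqn":"nqn.2016-06.io.spdk:host1","psk":"/tmp/tmp.sNwsB23NDe"}}' |
nc -U /var/tmp/bdevperf.sock   # should print the -32602 response seen above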
00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2800044 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2800044 /var/tmp/bdevperf.sock 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2800044 ']' 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:38.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:38.917 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:39.175 [2024-05-15 10:54:55.175280] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
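[annotation] The es=1, (( es > 128 )), (( !es == 0 )) steps at the top of this trace are the tail of autotest_common.sh's NOT() wrapper, which inverts the wrapped command's exit status so that an expected failure (here: attaching with a wrong or empty PSK) counts as a pass. A minimal reconstruction from the visible xtrace, not the verbatim helper:

NOT() {
    local es=0
    "$@" || es=$?              # run the wrapped command, remember its status
    ((es > 128)) && return 1   # >128 means killed by a signal: a real failure
    ((!es == 0))               # invert: succeed only if the command failed
}

# as used at target/tls.sh@155 above: attaching with no PSK must fail
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''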
00:14:39.175 [2024-05-15 10:54:55.175376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2800044 ] 00:14:39.175 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.175 [2024-05-15 10:54:55.247011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:39.175 [2024-05-15 10:54:55.354388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:39.433 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:39.433 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:39.433 10:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:14:39.691 [2024-05-15 10:54:55.747155] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:39.691 [2024-05-15 10:54:55.749025] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11c7ab0 (9): Bad file descriptor 00:14:39.691 [2024-05-15 10:54:55.750021] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:39.691 [2024-05-15 10:54:55.750043] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:14:39.691 [2024-05-15 10:54:55.750068] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:14:39.691 request: 00:14:39.691 { 00:14:39.691 "name": "TLSTEST", 00:14:39.691 "trtype": "tcp", 00:14:39.691 "traddr": "10.0.0.2", 00:14:39.691 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:39.691 "adrfam": "ipv4", 00:14:39.691 "trsvcid": "4420", 00:14:39.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:39.691 "method": "bdev_nvme_attach_controller", 00:14:39.691 "req_id": 1 00:14:39.691 } 00:14:39.691 Got JSON-RPC error response 00:14:39.691 response: 00:14:39.691 { 00:14:39.691 "code": -32602, 00:14:39.691 "message": "Invalid parameters" 00:14:39.691 } 00:14:39.691 10:54:55 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2800044 00:14:39.691 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2800044 ']' 00:14:39.691 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2800044 00:14:39.691 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:39.691 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:39.691 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2800044 00:14:39.691 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:39.691 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:39.691 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2800044' 00:14:39.691 killing process with pid 2800044 00:14:39.691 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2800044 00:14:39.691 Received shutdown signal, test time was about 10.000000 seconds 00:14:39.691 00:14:39.691 Latency(us) 00:14:39.691 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:39.691 =================================================================================================================== 00:14:39.691 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:39.691 10:54:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2800044 00:14:39.957 10:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:39.957 10:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:39.957 10:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:39.957 10:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:39.957 10:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:39.957 10:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2796522 00:14:39.957 10:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2796522 ']' 00:14:39.957 10:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2796522 00:14:39.957 10:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:39.957 10:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:39.957 10:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2796522 00:14:39.957 10:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:39.957 10:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:39.957 10:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2796522' 00:14:39.957 killing process with pid 2796522 00:14:39.958 10:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2796522 
00:14:39.958 [2024-05-15 10:54:56.086118] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:39.958 [2024-05-15 10:54:56.086176] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:39.958 10:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2796522 00:14:40.246 10:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:40.246 10:54:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:40.246 10:54:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:14:40.246 10:54:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:14:40.246 10:54:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:40.246 10:54:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:14:40.246 10:54:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:14:40.246 10:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:40.246 10:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:14:40.246 10:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.egpszoN2Nk 00:14:40.246 10:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:40.246 10:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.egpszoN2Nk 00:14:40.246 10:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:14:40.246 10:54:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:40.246 10:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:40.246 10:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.246 10:54:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2800208 00:14:40.246 10:54:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:40.246 10:54:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2800208 00:14:40.246 10:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2800208 ']' 00:14:40.246 10:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.246 10:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:40.246 10:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.246 10:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:40.246 10:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.246 [2024-05-15 10:54:56.469874] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
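[annotation] The key_long built above is the TLS PSK interchange format: the NVMeTLSkey-1 prefix, a two-digit hash identifier (per the NVMe/TCP TLS spec, 02 selects SHA-384), then base64 of the configured key material with a CRC32 appended, colon-terminated. A sketch of the inline-python step that format_key drives in nvmf/common.sh; the little-endian CRC byte order here is an assumption read off the output, not quoted from the script:

format_key() {
    local prefix=$1 key=$2 digest=$3
    python3 -c '
import base64, struct, sys, zlib
prefix, key, digest = sys.argv[1:4]
# key material plus CRC32 of it, base64-encoded, wrapped in prefix:digest:
data = key.encode() + struct.pack("<I", zlib.crc32(key.encode()))
print(f"{prefix}:{int(digest):02}:{base64.b64encode(data).decode()}:")
' "$prefix" "$key" "$digest"
}

format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2
# -> NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1...wWXNJw==:  (matches key_long above)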
00:14:40.246 [2024-05-15 10:54:56.469997] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.505 EAL: No free 2048 kB hugepages reported on node 1 00:14:40.505 [2024-05-15 10:54:56.549152] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.505 [2024-05-15 10:54:56.665010] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.505 [2024-05-15 10:54:56.665068] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.505 [2024-05-15 10:54:56.665082] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.505 [2024-05-15 10:54:56.665093] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.505 [2024-05-15 10:54:56.665103] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:40.505 [2024-05-15 10:54:56.665136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.763 10:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:40.763 10:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:40.763 10:54:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:40.763 10:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:40.763 10:54:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:40.763 10:54:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.763 10:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.egpszoN2Nk 00:14:40.763 10:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.egpszoN2Nk 00:14:40.763 10:54:56 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:41.022 [2024-05-15 10:54:57.016251] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.022 10:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:41.280 10:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:41.280 [2024-05-15 10:54:57.493464] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:41.280 [2024-05-15 10:54:57.493556] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:41.280 [2024-05-15 10:54:57.493770] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.280 10:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:41.537 malloc0 00:14:41.537 10:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
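[annotation] Pulling the target-side TLS setup out of the trace: setup_nvmf_tgt() is a short rpc.py sequence, finishing with the host-authorization call that appears just below. Condensed here with the long workspace path shortened to rpc.py; the -k flag on the listener is what enables the secure (TLS) channel, and the PSK file is bound per host NQN:

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k           # -k: TLS listener
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.egpszoN2Nk   # per-host PSK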
00:14:41.795 10:54:57 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.egpszoN2Nk 00:14:42.054 [2024-05-15 10:54:58.214973] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:42.054 10:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.egpszoN2Nk 00:14:42.054 10:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:42.054 10:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:42.054 10:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:42.054 10:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.egpszoN2Nk' 00:14:42.054 10:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:42.054 10:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2800480 00:14:42.054 10:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:42.054 10:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:42.054 10:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2800480 /var/tmp/bdevperf.sock 00:14:42.054 10:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2800480 ']' 00:14:42.054 10:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:42.054 10:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:42.054 10:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:42.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:42.054 10:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:42.054 10:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:42.054 [2024-05-15 10:54:58.270840] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:14:42.054 [2024-05-15 10:54:58.270926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2800480 ] 00:14:42.312 EAL: No free 2048 kB hugepages reported on node 1 00:14:42.312 [2024-05-15 10:54:58.341568] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.312 [2024-05-15 10:54:58.454970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:42.570 10:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:42.570 10:54:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:42.570 10:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.egpszoN2Nk 00:14:42.570 [2024-05-15 10:54:58.784938] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:42.570 [2024-05-15 10:54:58.785084] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:14:42.828 TLSTESTn1 00:14:42.828 10:54:58 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:42.828 Running I/O for 10 seconds... 00:14:55.025 00:14:55.025 Latency(us) 00:14:55.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.025 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:55.025 Verification LBA range: start 0x0 length 0x2000 00:14:55.025 TLSTESTn1 : 10.09 1255.55 4.90 0.00 0.00 101590.98 10000.31 157674.76 00:14:55.025 =================================================================================================================== 00:14:55.025 Total : 1255.55 4.90 0.00 0.00 101590.98 10000.31 157674.76 00:14:55.025 0 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2800480 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2800480 ']' 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2800480 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2800480 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2800480' 00:14:55.025 killing process with pid 2800480 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2800480 00:14:55.025 Received shutdown signal, test time was about 10.000000 seconds 00:14:55.025 00:14:55.025 Latency(us) 00:14:55.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:14:55.025 =================================================================================================================== 00:14:55.025 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:55.025 [2024-05-15 10:55:09.137210] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2800480 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.egpszoN2Nk 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.egpszoN2Nk 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.egpszoN2Nk 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.egpszoN2Nk 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.egpszoN2Nk' 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2801803 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2801803 /var/tmp/bdevperf.sock 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2801803 ']' 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:55.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.025 [2024-05-15 10:55:09.440174] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
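[annotation] Stepping back to the TLSTESTn1 case that just passed: the pattern is to start bdevperf idle with -z, attach the TLS-wrapped controller through its private RPC socket, then trigger the configured workload. Condensed from the trace above, with binary and script paths shortened (the -t 20 on bdevperf.py is its RPC wait timeout; the I/O duration is the -t 10 given to bdevperf):

bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.egpszoN2Nk
bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests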
00:14:55.025 [2024-05-15 10:55:09.440273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2801803 ] 00:14:55.025 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.025 [2024-05-15 10:55:09.510538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.025 [2024-05-15 10:55:09.613461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:55.025 10:55:09 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.egpszoN2Nk 00:14:55.025 [2024-05-15 10:55:09.997374] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:55.025 [2024-05-15 10:55:09.997453] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:55.025 [2024-05-15 10:55:09.997467] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.egpszoN2Nk 00:14:55.025 request: 00:14:55.025 { 00:14:55.025 "name": "TLSTEST", 00:14:55.025 "trtype": "tcp", 00:14:55.025 "traddr": "10.0.0.2", 00:14:55.025 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:55.025 "adrfam": "ipv4", 00:14:55.025 "trsvcid": "4420", 00:14:55.025 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:55.025 "psk": "/tmp/tmp.egpszoN2Nk", 00:14:55.025 "method": "bdev_nvme_attach_controller", 00:14:55.025 "req_id": 1 00:14:55.025 } 00:14:55.025 Got JSON-RPC error response 00:14:55.025 response: 00:14:55.025 { 00:14:55.025 "code": -1, 00:14:55.025 "message": "Operation not permitted" 00:14:55.025 } 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2801803 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2801803 ']' 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2801803 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2801803 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2801803' 00:14:55.025 killing process with pid 2801803 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2801803 00:14:55.025 Received shutdown signal, test time was about 10.000000 seconds 00:14:55.025 00:14:55.025 Latency(us) 00:14:55.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.025 =================================================================================================================== 00:14:55.025 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 
-- # wait 2801803 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2800208 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2800208 ']' 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2800208 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2800208 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2800208' 00:14:55.025 killing process with pid 2800208 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2800208 00:14:55.025 [2024-05-15 10:55:10.334547] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:55.025 [2024-05-15 10:55:10.334605] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:14:55.025 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2800208 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2801953 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2801953 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2801953 ']' 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
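[annotation] The chmod 0666 / "Incorrect permissions for PSK file" exchange above is the host-side permission gate on PSK files, and the target applies the same rule when nvmf_subsystem_add_host is retried below: any group- or other-accessible permission bit disqualifies the key. A sketch of the equivalent rule in shell, assuming coreutils stat; the real check lives in C in bdev_nvme.c and tcp.c:

psk=/tmp/tmp.egpszoN2Nk
mode=$(stat -c '%a' "$psk")        # e.g. 666 after the chmod above
if ((8#$mode & 8#077)); then       # any group/other bit set -> reject
    echo "refusing PSK $psk: mode $mode is too permissive" >&2
    exit 1
fi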
00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.026 [2024-05-15 10:55:10.647603] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:14:55.026 [2024-05-15 10:55:10.647678] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.026 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.026 [2024-05-15 10:55:10.726683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.026 [2024-05-15 10:55:10.844727] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:55.026 [2024-05-15 10:55:10.844796] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:55.026 [2024-05-15 10:55:10.844812] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:55.026 [2024-05-15 10:55:10.844825] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:55.026 [2024-05-15 10:55:10.844837] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:55.026 [2024-05-15 10:55:10.844870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.egpszoN2Nk 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.egpszoN2Nk 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.egpszoN2Nk 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.egpszoN2Nk 00:14:55.026 10:55:10 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:55.284 [2024-05-15 10:55:11.264752] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:55.284 10:55:11 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:55.541 10:55:11 nvmf_tcp.nvmf_tls 
-- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:55.798 [2024-05-15 10:55:11.810204] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:55.799 [2024-05-15 10:55:11.810310] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:55.799 [2024-05-15 10:55:11.810531] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:55.799 10:55:11 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:56.056 malloc0 00:14:56.056 10:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:56.313 10:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.egpszoN2Nk 00:14:56.313 [2024-05-15 10:55:12.539413] tcp.c:3572:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:14:56.313 [2024-05-15 10:55:12.539455] tcp.c:3658:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:14:56.313 [2024-05-15 10:55:12.539500] subsystem.c:1030:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:14:56.313 request: 00:14:56.313 { 00:14:56.313 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:56.313 "host": "nqn.2016-06.io.spdk:host1", 00:14:56.313 "psk": "/tmp/tmp.egpszoN2Nk", 00:14:56.313 "method": "nvmf_subsystem_add_host", 00:14:56.313 "req_id": 1 00:14:56.313 } 00:14:56.313 Got JSON-RPC error response 00:14:56.313 response: 00:14:56.313 { 00:14:56.313 "code": -32603, 00:14:56.313 "message": "Internal error" 00:14:56.313 } 00:14:56.570 10:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:14:56.570 10:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:56.570 10:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:56.570 10:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:56.570 10:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2801953 00:14:56.570 10:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2801953 ']' 00:14:56.570 10:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2801953 00:14:56.570 10:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:14:56.570 10:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:14:56.570 10:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2801953 00:14:56.570 10:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:14:56.570 10:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:14:56.570 10:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2801953' 00:14:56.570 killing process with pid 2801953 00:14:56.570 10:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2801953 00:14:56.570 [2024-05-15 10:55:12.590299] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:56.570 10:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2801953 00:14:56.828 10:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.egpszoN2Nk 00:14:56.828 10:55:12 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:14:56.828 10:55:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:56.828 10:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:14:56.828 10:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.828 10:55:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2802247 00:14:56.828 10:55:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:56.828 10:55:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2802247 00:14:56.828 10:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2802247 ']' 00:14:56.828 10:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.828 10:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:56.828 10:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.828 10:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:56.828 10:55:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:56.828 [2024-05-15 10:55:12.938359] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:14:56.828 [2024-05-15 10:55:12.938469] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.828 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.828 [2024-05-15 10:55:13.019856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.088 [2024-05-15 10:55:13.131863] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:57.089 [2024-05-15 10:55:13.131940] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.089 [2024-05-15 10:55:13.131958] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.089 [2024-05-15 10:55:13.131972] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.089 [2024-05-15 10:55:13.131994] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:57.089 [2024-05-15 10:55:13.132025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.655 10:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:57.655 10:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:57.656 10:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:57.656 10:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:57.656 10:55:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:57.913 10:55:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.913 10:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.egpszoN2Nk 00:14:57.913 10:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.egpszoN2Nk 00:14:57.913 10:55:13 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:57.913 [2024-05-15 10:55:14.116589] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.913 10:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:58.478 10:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:14:58.478 [2024-05-15 10:55:14.690092] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:58.478 [2024-05-15 10:55:14.690174] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:58.478 [2024-05-15 10:55:14.690388] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:58.478 10:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:58.737 malloc0 00:14:58.995 10:55:14 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:58.995 10:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.egpszoN2Nk 00:14:59.253 [2024-05-15 10:55:15.427494] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:14:59.253 10:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2802539 00:14:59.253 10:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:59.253 10:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:59.253 10:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2802539 /var/tmp/bdevperf.sock 00:14:59.253 10:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2802539 ']' 00:14:59.253 10:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:14:59.253 10:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:14:59.253 10:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:59.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:59.253 10:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:14:59.253 10:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:59.512 [2024-05-15 10:55:15.487947] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:14:59.512 [2024-05-15 10:55:15.488038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2802539 ] 00:14:59.512 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.512 [2024-05-15 10:55:15.556195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:59.512 [2024-05-15 10:55:15.660663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:59.769 10:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:14:59.769 10:55:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:14:59.769 10:55:15 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.egpszoN2Nk 00:15:00.026 [2024-05-15 10:55:16.009263] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:00.026 [2024-05-15 10:55:16.009368] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:00.026 TLSTESTn1 00:15:00.026 10:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:15:00.284 10:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:15:00.284 "subsystems": [ 00:15:00.284 { 00:15:00.284 "subsystem": "keyring", 00:15:00.284 "config": [] 00:15:00.284 }, 00:15:00.284 { 00:15:00.284 "subsystem": "iobuf", 00:15:00.284 "config": [ 00:15:00.284 { 00:15:00.284 "method": "iobuf_set_options", 00:15:00.284 "params": { 00:15:00.284 "small_pool_count": 8192, 00:15:00.284 "large_pool_count": 1024, 00:15:00.284 "small_bufsize": 8192, 00:15:00.284 "large_bufsize": 135168 00:15:00.284 } 00:15:00.284 } 00:15:00.284 ] 00:15:00.284 }, 00:15:00.284 { 00:15:00.284 "subsystem": "sock", 00:15:00.284 "config": [ 00:15:00.284 { 00:15:00.284 "method": "sock_set_default_impl", 00:15:00.284 "params": { 00:15:00.284 "impl_name": "posix" 00:15:00.284 } 00:15:00.284 }, 00:15:00.284 { 00:15:00.284 "method": "sock_impl_set_options", 00:15:00.284 "params": { 00:15:00.284 "impl_name": "ssl", 00:15:00.284 "recv_buf_size": 4096, 00:15:00.284 "send_buf_size": 4096, 00:15:00.284 "enable_recv_pipe": true, 00:15:00.284 "enable_quickack": false, 00:15:00.284 "enable_placement_id": 0, 00:15:00.284 "enable_zerocopy_send_server": true, 00:15:00.284 "enable_zerocopy_send_client": false, 00:15:00.284 "zerocopy_threshold": 0, 00:15:00.284 "tls_version": 0, 00:15:00.284 "enable_ktls": 
false 00:15:00.284 } 00:15:00.284 }, 00:15:00.284 { 00:15:00.284 "method": "sock_impl_set_options", 00:15:00.284 "params": { 00:15:00.284 "impl_name": "posix", 00:15:00.284 "recv_buf_size": 2097152, 00:15:00.284 "send_buf_size": 2097152, 00:15:00.284 "enable_recv_pipe": true, 00:15:00.284 "enable_quickack": false, 00:15:00.284 "enable_placement_id": 0, 00:15:00.284 "enable_zerocopy_send_server": true, 00:15:00.284 "enable_zerocopy_send_client": false, 00:15:00.284 "zerocopy_threshold": 0, 00:15:00.284 "tls_version": 0, 00:15:00.284 "enable_ktls": false 00:15:00.284 } 00:15:00.284 } 00:15:00.284 ] 00:15:00.284 }, 00:15:00.284 { 00:15:00.284 "subsystem": "vmd", 00:15:00.284 "config": [] 00:15:00.284 }, 00:15:00.284 { 00:15:00.284 "subsystem": "accel", 00:15:00.284 "config": [ 00:15:00.284 { 00:15:00.284 "method": "accel_set_options", 00:15:00.284 "params": { 00:15:00.284 "small_cache_size": 128, 00:15:00.284 "large_cache_size": 16, 00:15:00.284 "task_count": 2048, 00:15:00.284 "sequence_count": 2048, 00:15:00.284 "buf_count": 2048 00:15:00.284 } 00:15:00.284 } 00:15:00.284 ] 00:15:00.284 }, 00:15:00.284 { 00:15:00.284 "subsystem": "bdev", 00:15:00.284 "config": [ 00:15:00.284 { 00:15:00.284 "method": "bdev_set_options", 00:15:00.284 "params": { 00:15:00.284 "bdev_io_pool_size": 65535, 00:15:00.284 "bdev_io_cache_size": 256, 00:15:00.284 "bdev_auto_examine": true, 00:15:00.284 "iobuf_small_cache_size": 128, 00:15:00.284 "iobuf_large_cache_size": 16 00:15:00.284 } 00:15:00.284 }, 00:15:00.284 { 00:15:00.284 "method": "bdev_raid_set_options", 00:15:00.284 "params": { 00:15:00.284 "process_window_size_kb": 1024 00:15:00.284 } 00:15:00.284 }, 00:15:00.284 { 00:15:00.284 "method": "bdev_iscsi_set_options", 00:15:00.284 "params": { 00:15:00.284 "timeout_sec": 30 00:15:00.284 } 00:15:00.284 }, 00:15:00.284 { 00:15:00.284 "method": "bdev_nvme_set_options", 00:15:00.284 "params": { 00:15:00.284 "action_on_timeout": "none", 00:15:00.284 "timeout_us": 0, 00:15:00.284 "timeout_admin_us": 0, 00:15:00.284 "keep_alive_timeout_ms": 10000, 00:15:00.284 "arbitration_burst": 0, 00:15:00.284 "low_priority_weight": 0, 00:15:00.284 "medium_priority_weight": 0, 00:15:00.284 "high_priority_weight": 0, 00:15:00.284 "nvme_adminq_poll_period_us": 10000, 00:15:00.284 "nvme_ioq_poll_period_us": 0, 00:15:00.284 "io_queue_requests": 0, 00:15:00.284 "delay_cmd_submit": true, 00:15:00.284 "transport_retry_count": 4, 00:15:00.284 "bdev_retry_count": 3, 00:15:00.284 "transport_ack_timeout": 0, 00:15:00.284 "ctrlr_loss_timeout_sec": 0, 00:15:00.284 "reconnect_delay_sec": 0, 00:15:00.284 "fast_io_fail_timeout_sec": 0, 00:15:00.285 "disable_auto_failback": false, 00:15:00.285 "generate_uuids": false, 00:15:00.285 "transport_tos": 0, 00:15:00.285 "nvme_error_stat": false, 00:15:00.285 "rdma_srq_size": 0, 00:15:00.285 "io_path_stat": false, 00:15:00.285 "allow_accel_sequence": false, 00:15:00.285 "rdma_max_cq_size": 0, 00:15:00.285 "rdma_cm_event_timeout_ms": 0, 00:15:00.285 "dhchap_digests": [ 00:15:00.285 "sha256", 00:15:00.285 "sha384", 00:15:00.285 "sha512" 00:15:00.285 ], 00:15:00.285 "dhchap_dhgroups": [ 00:15:00.285 "null", 00:15:00.285 "ffdhe2048", 00:15:00.285 "ffdhe3072", 00:15:00.285 "ffdhe4096", 00:15:00.285 "ffdhe6144", 00:15:00.285 "ffdhe8192" 00:15:00.285 ] 00:15:00.285 } 00:15:00.285 }, 00:15:00.285 { 00:15:00.285 "method": "bdev_nvme_set_hotplug", 00:15:00.285 "params": { 00:15:00.285 "period_us": 100000, 00:15:00.285 "enable": false 00:15:00.285 } 00:15:00.285 }, 00:15:00.285 { 00:15:00.285 "method": 
"bdev_malloc_create", 00:15:00.285 "params": { 00:15:00.285 "name": "malloc0", 00:15:00.285 "num_blocks": 8192, 00:15:00.285 "block_size": 4096, 00:15:00.285 "physical_block_size": 4096, 00:15:00.285 "uuid": "dc209511-58c4-4a02-aa57-37bfdde073a5", 00:15:00.285 "optimal_io_boundary": 0 00:15:00.285 } 00:15:00.285 }, 00:15:00.285 { 00:15:00.285 "method": "bdev_wait_for_examine" 00:15:00.285 } 00:15:00.285 ] 00:15:00.285 }, 00:15:00.285 { 00:15:00.285 "subsystem": "nbd", 00:15:00.285 "config": [] 00:15:00.285 }, 00:15:00.285 { 00:15:00.285 "subsystem": "scheduler", 00:15:00.285 "config": [ 00:15:00.285 { 00:15:00.285 "method": "framework_set_scheduler", 00:15:00.285 "params": { 00:15:00.285 "name": "static" 00:15:00.285 } 00:15:00.285 } 00:15:00.285 ] 00:15:00.285 }, 00:15:00.285 { 00:15:00.285 "subsystem": "nvmf", 00:15:00.285 "config": [ 00:15:00.285 { 00:15:00.285 "method": "nvmf_set_config", 00:15:00.285 "params": { 00:15:00.285 "discovery_filter": "match_any", 00:15:00.285 "admin_cmd_passthru": { 00:15:00.285 "identify_ctrlr": false 00:15:00.285 } 00:15:00.285 } 00:15:00.285 }, 00:15:00.285 { 00:15:00.285 "method": "nvmf_set_max_subsystems", 00:15:00.285 "params": { 00:15:00.285 "max_subsystems": 1024 00:15:00.285 } 00:15:00.285 }, 00:15:00.285 { 00:15:00.285 "method": "nvmf_set_crdt", 00:15:00.285 "params": { 00:15:00.285 "crdt1": 0, 00:15:00.285 "crdt2": 0, 00:15:00.285 "crdt3": 0 00:15:00.285 } 00:15:00.285 }, 00:15:00.285 { 00:15:00.285 "method": "nvmf_create_transport", 00:15:00.285 "params": { 00:15:00.285 "trtype": "TCP", 00:15:00.285 "max_queue_depth": 128, 00:15:00.285 "max_io_qpairs_per_ctrlr": 127, 00:15:00.285 "in_capsule_data_size": 4096, 00:15:00.285 "max_io_size": 131072, 00:15:00.285 "io_unit_size": 131072, 00:15:00.285 "max_aq_depth": 128, 00:15:00.285 "num_shared_buffers": 511, 00:15:00.285 "buf_cache_size": 4294967295, 00:15:00.285 "dif_insert_or_strip": false, 00:15:00.285 "zcopy": false, 00:15:00.285 "c2h_success": false, 00:15:00.285 "sock_priority": 0, 00:15:00.285 "abort_timeout_sec": 1, 00:15:00.285 "ack_timeout": 0, 00:15:00.285 "data_wr_pool_size": 0 00:15:00.285 } 00:15:00.285 }, 00:15:00.285 { 00:15:00.285 "method": "nvmf_create_subsystem", 00:15:00.285 "params": { 00:15:00.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.285 "allow_any_host": false, 00:15:00.285 "serial_number": "SPDK00000000000001", 00:15:00.285 "model_number": "SPDK bdev Controller", 00:15:00.285 "max_namespaces": 10, 00:15:00.285 "min_cntlid": 1, 00:15:00.285 "max_cntlid": 65519, 00:15:00.285 "ana_reporting": false 00:15:00.285 } 00:15:00.285 }, 00:15:00.285 { 00:15:00.285 "method": "nvmf_subsystem_add_host", 00:15:00.285 "params": { 00:15:00.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.285 "host": "nqn.2016-06.io.spdk:host1", 00:15:00.285 "psk": "/tmp/tmp.egpszoN2Nk" 00:15:00.285 } 00:15:00.285 }, 00:15:00.285 { 00:15:00.285 "method": "nvmf_subsystem_add_ns", 00:15:00.285 "params": { 00:15:00.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.285 "namespace": { 00:15:00.285 "nsid": 1, 00:15:00.285 "bdev_name": "malloc0", 00:15:00.285 "nguid": "DC20951158C44A02AA5737BFDDE073A5", 00:15:00.285 "uuid": "dc209511-58c4-4a02-aa57-37bfdde073a5", 00:15:00.285 "no_auto_visible": false 00:15:00.285 } 00:15:00.285 } 00:15:00.285 }, 00:15:00.285 { 00:15:00.285 "method": "nvmf_subsystem_add_listener", 00:15:00.285 "params": { 00:15:00.285 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.285 "listen_address": { 00:15:00.285 "trtype": "TCP", 00:15:00.285 "adrfam": "IPv4", 00:15:00.285 "traddr": 
"10.0.0.2", 00:15:00.285 "trsvcid": "4420" 00:15:00.285 }, 00:15:00.285 "secure_channel": true 00:15:00.285 } 00:15:00.285 } 00:15:00.285 ] 00:15:00.285 } 00:15:00.285 ] 00:15:00.285 }' 00:15:00.285 10:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:00.542 10:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:15:00.542 "subsystems": [ 00:15:00.542 { 00:15:00.542 "subsystem": "keyring", 00:15:00.542 "config": [] 00:15:00.542 }, 00:15:00.542 { 00:15:00.542 "subsystem": "iobuf", 00:15:00.542 "config": [ 00:15:00.542 { 00:15:00.542 "method": "iobuf_set_options", 00:15:00.542 "params": { 00:15:00.542 "small_pool_count": 8192, 00:15:00.542 "large_pool_count": 1024, 00:15:00.542 "small_bufsize": 8192, 00:15:00.542 "large_bufsize": 135168 00:15:00.542 } 00:15:00.542 } 00:15:00.542 ] 00:15:00.542 }, 00:15:00.542 { 00:15:00.542 "subsystem": "sock", 00:15:00.542 "config": [ 00:15:00.542 { 00:15:00.542 "method": "sock_set_default_impl", 00:15:00.542 "params": { 00:15:00.542 "impl_name": "posix" 00:15:00.542 } 00:15:00.542 }, 00:15:00.542 { 00:15:00.542 "method": "sock_impl_set_options", 00:15:00.542 "params": { 00:15:00.542 "impl_name": "ssl", 00:15:00.542 "recv_buf_size": 4096, 00:15:00.542 "send_buf_size": 4096, 00:15:00.542 "enable_recv_pipe": true, 00:15:00.542 "enable_quickack": false, 00:15:00.542 "enable_placement_id": 0, 00:15:00.542 "enable_zerocopy_send_server": true, 00:15:00.542 "enable_zerocopy_send_client": false, 00:15:00.542 "zerocopy_threshold": 0, 00:15:00.542 "tls_version": 0, 00:15:00.542 "enable_ktls": false 00:15:00.542 } 00:15:00.542 }, 00:15:00.542 { 00:15:00.542 "method": "sock_impl_set_options", 00:15:00.542 "params": { 00:15:00.542 "impl_name": "posix", 00:15:00.542 "recv_buf_size": 2097152, 00:15:00.542 "send_buf_size": 2097152, 00:15:00.542 "enable_recv_pipe": true, 00:15:00.542 "enable_quickack": false, 00:15:00.542 "enable_placement_id": 0, 00:15:00.542 "enable_zerocopy_send_server": true, 00:15:00.542 "enable_zerocopy_send_client": false, 00:15:00.542 "zerocopy_threshold": 0, 00:15:00.542 "tls_version": 0, 00:15:00.542 "enable_ktls": false 00:15:00.542 } 00:15:00.542 } 00:15:00.542 ] 00:15:00.542 }, 00:15:00.542 { 00:15:00.542 "subsystem": "vmd", 00:15:00.542 "config": [] 00:15:00.542 }, 00:15:00.542 { 00:15:00.542 "subsystem": "accel", 00:15:00.542 "config": [ 00:15:00.542 { 00:15:00.542 "method": "accel_set_options", 00:15:00.542 "params": { 00:15:00.542 "small_cache_size": 128, 00:15:00.542 "large_cache_size": 16, 00:15:00.542 "task_count": 2048, 00:15:00.542 "sequence_count": 2048, 00:15:00.542 "buf_count": 2048 00:15:00.542 } 00:15:00.542 } 00:15:00.542 ] 00:15:00.542 }, 00:15:00.542 { 00:15:00.542 "subsystem": "bdev", 00:15:00.542 "config": [ 00:15:00.542 { 00:15:00.542 "method": "bdev_set_options", 00:15:00.542 "params": { 00:15:00.542 "bdev_io_pool_size": 65535, 00:15:00.542 "bdev_io_cache_size": 256, 00:15:00.542 "bdev_auto_examine": true, 00:15:00.542 "iobuf_small_cache_size": 128, 00:15:00.542 "iobuf_large_cache_size": 16 00:15:00.542 } 00:15:00.543 }, 00:15:00.543 { 00:15:00.543 "method": "bdev_raid_set_options", 00:15:00.543 "params": { 00:15:00.543 "process_window_size_kb": 1024 00:15:00.543 } 00:15:00.543 }, 00:15:00.543 { 00:15:00.543 "method": "bdev_iscsi_set_options", 00:15:00.543 "params": { 00:15:00.543 "timeout_sec": 30 00:15:00.543 } 00:15:00.543 }, 00:15:00.543 { 00:15:00.543 "method": "bdev_nvme_set_options", 
00:15:00.543 "params": { 00:15:00.543 "action_on_timeout": "none", 00:15:00.543 "timeout_us": 0, 00:15:00.543 "timeout_admin_us": 0, 00:15:00.543 "keep_alive_timeout_ms": 10000, 00:15:00.543 "arbitration_burst": 0, 00:15:00.543 "low_priority_weight": 0, 00:15:00.543 "medium_priority_weight": 0, 00:15:00.543 "high_priority_weight": 0, 00:15:00.543 "nvme_adminq_poll_period_us": 10000, 00:15:00.543 "nvme_ioq_poll_period_us": 0, 00:15:00.543 "io_queue_requests": 512, 00:15:00.543 "delay_cmd_submit": true, 00:15:00.543 "transport_retry_count": 4, 00:15:00.543 "bdev_retry_count": 3, 00:15:00.543 "transport_ack_timeout": 0, 00:15:00.543 "ctrlr_loss_timeout_sec": 0, 00:15:00.543 "reconnect_delay_sec": 0, 00:15:00.543 "fast_io_fail_timeout_sec": 0, 00:15:00.543 "disable_auto_failback": false, 00:15:00.543 "generate_uuids": false, 00:15:00.543 "transport_tos": 0, 00:15:00.543 "nvme_error_stat": false, 00:15:00.543 "rdma_srq_size": 0, 00:15:00.543 "io_path_stat": false, 00:15:00.543 "allow_accel_sequence": false, 00:15:00.543 "rdma_max_cq_size": 0, 00:15:00.543 "rdma_cm_event_timeout_ms": 0, 00:15:00.543 "dhchap_digests": [ 00:15:00.543 "sha256", 00:15:00.543 "sha384", 00:15:00.543 "sha512" 00:15:00.543 ], 00:15:00.543 "dhchap_dhgroups": [ 00:15:00.543 "null", 00:15:00.543 "ffdhe2048", 00:15:00.543 "ffdhe3072", 00:15:00.543 "ffdhe4096", 00:15:00.543 "ffdhe6144", 00:15:00.543 "ffdhe8192" 00:15:00.543 ] 00:15:00.543 } 00:15:00.543 }, 00:15:00.543 { 00:15:00.543 "method": "bdev_nvme_attach_controller", 00:15:00.543 "params": { 00:15:00.543 "name": "TLSTEST", 00:15:00.543 "trtype": "TCP", 00:15:00.543 "adrfam": "IPv4", 00:15:00.543 "traddr": "10.0.0.2", 00:15:00.543 "trsvcid": "4420", 00:15:00.543 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:00.543 "prchk_reftag": false, 00:15:00.543 "prchk_guard": false, 00:15:00.543 "ctrlr_loss_timeout_sec": 0, 00:15:00.543 "reconnect_delay_sec": 0, 00:15:00.543 "fast_io_fail_timeout_sec": 0, 00:15:00.543 "psk": "/tmp/tmp.egpszoN2Nk", 00:15:00.543 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:00.543 "hdgst": false, 00:15:00.543 "ddgst": false 00:15:00.543 } 00:15:00.543 }, 00:15:00.543 { 00:15:00.543 "method": "bdev_nvme_set_hotplug", 00:15:00.543 "params": { 00:15:00.543 "period_us": 100000, 00:15:00.543 "enable": false 00:15:00.543 } 00:15:00.543 }, 00:15:00.543 { 00:15:00.543 "method": "bdev_wait_for_examine" 00:15:00.543 } 00:15:00.543 ] 00:15:00.543 }, 00:15:00.543 { 00:15:00.543 "subsystem": "nbd", 00:15:00.543 "config": [] 00:15:00.543 } 00:15:00.543 ] 00:15:00.543 }' 00:15:00.543 10:55:16 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2802539 00:15:00.543 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2802539 ']' 00:15:00.543 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2802539 00:15:00.543 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:00.543 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:00.543 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2802539 00:15:00.543 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:15:00.543 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:15:00.543 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2802539' 00:15:00.543 killing process with pid 2802539 00:15:00.543 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 
-- # kill 2802539 00:15:00.543 Received shutdown signal, test time was about 10.000000 seconds 00:15:00.543 00:15:00.543 Latency(us) 00:15:00.543 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.543 =================================================================================================================== 00:15:00.543 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:00.543 [2024-05-15 10:55:16.751156] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:15:00.543 10:55:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2802539 00:15:00.860 10:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2802247 00:15:00.860 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2802247 ']' 00:15:00.860 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2802247 00:15:00.860 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:00.860 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:00.860 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2802247 00:15:00.860 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:00.860 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:00.860 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2802247' 00:15:00.860 killing process with pid 2802247 00:15:00.860 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2802247 00:15:00.860 [2024-05-15 10:55:17.087835] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:00.860 [2024-05-15 10:55:17.087881] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:00.860 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2802247 00:15:01.426 10:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:01.426 10:55:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:01.426 10:55:17 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:15:01.426 "subsystems": [ 00:15:01.426 { 00:15:01.426 "subsystem": "keyring", 00:15:01.426 "config": [] 00:15:01.426 }, 00:15:01.426 { 00:15:01.426 "subsystem": "iobuf", 00:15:01.426 "config": [ 00:15:01.426 { 00:15:01.426 "method": "iobuf_set_options", 00:15:01.426 "params": { 00:15:01.426 "small_pool_count": 8192, 00:15:01.426 "large_pool_count": 1024, 00:15:01.426 "small_bufsize": 8192, 00:15:01.426 "large_bufsize": 135168 00:15:01.426 } 00:15:01.426 } 00:15:01.426 ] 00:15:01.426 }, 00:15:01.426 { 00:15:01.426 "subsystem": "sock", 00:15:01.426 "config": [ 00:15:01.426 { 00:15:01.426 "method": "sock_set_default_impl", 00:15:01.426 "params": { 00:15:01.426 "impl_name": "posix" 00:15:01.426 } 00:15:01.426 }, 00:15:01.426 { 00:15:01.426 "method": "sock_impl_set_options", 00:15:01.426 "params": { 00:15:01.426 "impl_name": "ssl", 00:15:01.426 "recv_buf_size": 4096, 00:15:01.426 "send_buf_size": 4096, 00:15:01.426 "enable_recv_pipe": true, 00:15:01.426 "enable_quickack": false, 00:15:01.426 "enable_placement_id": 0, 00:15:01.426 
"enable_zerocopy_send_server": true, 00:15:01.426 "enable_zerocopy_send_client": false, 00:15:01.426 "zerocopy_threshold": 0, 00:15:01.426 "tls_version": 0, 00:15:01.426 "enable_ktls": false 00:15:01.426 } 00:15:01.426 }, 00:15:01.426 { 00:15:01.426 "method": "sock_impl_set_options", 00:15:01.426 "params": { 00:15:01.426 "impl_name": "posix", 00:15:01.426 "recv_buf_size": 2097152, 00:15:01.426 "send_buf_size": 2097152, 00:15:01.426 "enable_recv_pipe": true, 00:15:01.426 "enable_quickack": false, 00:15:01.426 "enable_placement_id": 0, 00:15:01.426 "enable_zerocopy_send_server": true, 00:15:01.426 "enable_zerocopy_send_client": false, 00:15:01.426 "zerocopy_threshold": 0, 00:15:01.426 "tls_version": 0, 00:15:01.426 "enable_ktls": false 00:15:01.426 } 00:15:01.426 } 00:15:01.426 ] 00:15:01.426 }, 00:15:01.426 { 00:15:01.426 "subsystem": "vmd", 00:15:01.426 "config": [] 00:15:01.426 }, 00:15:01.426 { 00:15:01.426 "subsystem": "accel", 00:15:01.426 "config": [ 00:15:01.426 { 00:15:01.426 "method": "accel_set_options", 00:15:01.426 "params": { 00:15:01.426 "small_cache_size": 128, 00:15:01.426 "large_cache_size": 16, 00:15:01.426 "task_count": 2048, 00:15:01.426 "sequence_count": 2048, 00:15:01.426 "buf_count": 2048 00:15:01.426 } 00:15:01.426 } 00:15:01.426 ] 00:15:01.426 }, 00:15:01.426 { 00:15:01.426 "subsystem": "bdev", 00:15:01.426 "config": [ 00:15:01.426 { 00:15:01.426 "method": "bdev_set_options", 00:15:01.426 "params": { 00:15:01.426 "bdev_io_pool_size": 65535, 00:15:01.426 "bdev_io_cache_size": 256, 00:15:01.426 "bdev_auto_examine": true, 00:15:01.426 "iobuf_small_cache_size": 128, 00:15:01.426 "iobuf_large_cache_size": 16 00:15:01.426 } 00:15:01.426 }, 00:15:01.426 { 00:15:01.426 "method": "bdev_raid_set_options", 00:15:01.426 "params": { 00:15:01.426 "process_window_size_kb": 1024 00:15:01.426 } 00:15:01.426 }, 00:15:01.426 { 00:15:01.426 "method": "bdev_iscsi_set_options", 00:15:01.426 "params": { 00:15:01.426 "timeout_sec": 30 00:15:01.426 } 00:15:01.426 }, 00:15:01.426 { 00:15:01.426 "method": "bdev_nvme_set_options", 00:15:01.426 "params": { 00:15:01.426 "action_on_timeout": "none", 00:15:01.426 "timeout_us": 0, 00:15:01.426 "timeout_admin_us": 0, 00:15:01.426 "keep_alive_timeout_ms": 10000, 00:15:01.426 "arbitration_burst": 0, 00:15:01.426 "low_priority_weight": 0, 00:15:01.426 "medium_priority_weight": 0, 00:15:01.426 "high_priority_weight": 0, 00:15:01.426 "nvme_adminq_poll_period_us": 10000, 00:15:01.426 "nvme_ioq_poll_period_us": 0, 00:15:01.426 "io_queue_requests": 0, 00:15:01.426 "delay_cmd_submit": true, 00:15:01.426 "transport_retry_count": 4, 00:15:01.426 "bdev_retry_count": 3, 00:15:01.426 "transport_ack_timeout": 0, 00:15:01.426 "ctrlr_loss_timeout_sec": 0, 00:15:01.426 "reconnect_delay_sec": 0, 00:15:01.426 "fast_io_fail_timeout_sec": 0, 00:15:01.426 "disable_auto_failback": false, 00:15:01.426 "generate_uuids": false, 00:15:01.426 "transport_tos": 0, 00:15:01.426 "nvme_error_stat": false, 00:15:01.426 "rdma_srq_size": 0, 00:15:01.426 "io_path_stat": false, 00:15:01.426 "allow_accel_sequence": false, 00:15:01.426 "rdma_max_cq_size": 0, 00:15:01.426 "rdma_cm_event_timeout_ms": 0, 00:15:01.426 "dhchap_digests": [ 00:15:01.426 "sha256", 00:15:01.426 "sha384", 00:15:01.426 "sha512" 00:15:01.426 ], 00:15:01.426 "dhchap_dhgroups": [ 00:15:01.426 "null", 00:15:01.426 "ffdhe2048", 00:15:01.426 "ffdhe3072", 00:15:01.426 "ffdhe4096", 00:15:01.426 "ffdhe6144", 00:15:01.426 "ffdhe8192" 00:15:01.426 ] 00:15:01.426 } 00:15:01.426 }, 00:15:01.426 { 00:15:01.426 "method": 
"bdev_nvme_set_hotplug", 00:15:01.426 "params": { 00:15:01.426 "period_us": 100000, 00:15:01.426 "enable": false 00:15:01.426 } 00:15:01.426 }, 00:15:01.426 { 00:15:01.426 "method": "bdev_malloc_create", 00:15:01.426 "params": { 00:15:01.426 "name": "malloc0", 00:15:01.426 "num_blocks": 8192, 00:15:01.426 "block_size": 4096, 00:15:01.426 "physical_block_size": 4096, 00:15:01.426 "uuid": "dc209511-58c4-4a02-aa57-37bfdde073a5", 00:15:01.426 "optimal_io_boundary": 0 00:15:01.426 } 00:15:01.426 }, 00:15:01.426 { 00:15:01.426 "method": "bdev_wait_for_examine" 00:15:01.426 } 00:15:01.426 ] 00:15:01.426 }, 00:15:01.426 { 00:15:01.426 "subsystem": "nbd", 00:15:01.426 "config": [] 00:15:01.426 }, 00:15:01.426 { 00:15:01.426 "subsystem": "scheduler", 00:15:01.426 "config": [ 00:15:01.426 { 00:15:01.426 "method": "framework_set_scheduler", 00:15:01.426 "params": { 00:15:01.426 "name": "static" 00:15:01.426 } 00:15:01.426 } 00:15:01.426 ] 00:15:01.426 }, 00:15:01.426 { 00:15:01.426 "subsystem": "nvmf", 00:15:01.426 "config": [ 00:15:01.426 { 00:15:01.426 "method": "nvmf_set_config", 00:15:01.426 "params": { 00:15:01.426 "discovery_filter": "match_any", 00:15:01.426 "admin_cmd_passthru": { 00:15:01.426 "identify_ctrlr": false 00:15:01.426 } 00:15:01.426 } 00:15:01.426 }, 00:15:01.426 { 00:15:01.426 "method": "nvmf_set_max_subsystems", 00:15:01.426 "params": { 00:15:01.426 "max_subsystems": 1024 00:15:01.426 } 00:15:01.426 }, 00:15:01.426 { 00:15:01.426 "method": "nvmf_set_crdt", 00:15:01.426 "params": { 00:15:01.426 "crdt1": 0, 00:15:01.426 "crdt2": 0, 00:15:01.426 "crdt3": 0 00:15:01.426 } 00:15:01.426 }, 00:15:01.426 { 00:15:01.426 "method": "nvmf_create_transport", 00:15:01.426 "params": { 00:15:01.426 "trtype": "TCP", 00:15:01.426 "max_queue_depth": 128, 00:15:01.426 "max_io_qpairs_per_ctrlr": 127, 00:15:01.426 "in_capsule_data_size": 4096, 00:15:01.427 "max_io_size": 131072, 00:15:01.427 "io_unit_size": 131072, 00:15:01.427 "max_aq_depth": 128, 00:15:01.427 "num_shared_buffers": 511, 00:15:01.427 "buf_cache_size": 4294967295, 00:15:01.427 "dif_insert_or_strip": false, 00:15:01.427 "zcopy": false, 00:15:01.427 "c2h_success": false, 00:15:01.427 "sock_priority": 0, 00:15:01.427 "abort_timeout_sec": 1, 00:15:01.427 "ack_timeout": 0, 00:15:01.427 "data_wr_pool_size": 0 00:15:01.427 } 00:15:01.427 }, 00:15:01.427 { 00:15:01.427 "method": "nvmf_create_subsystem", 00:15:01.427 "params": { 00:15:01.427 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.427 "allow_any_host": false, 00:15:01.427 "serial_number": "SPDK00000000000001", 00:15:01.427 "model_number": "SPDK bdev Controller", 00:15:01.427 "max_namespaces": 10, 00:15:01.427 "min_cntlid": 1, 00:15:01.427 "max_cntlid": 65519, 00:15:01.427 "ana_reporting": false 00:15:01.427 } 00:15:01.427 }, 00:15:01.427 { 00:15:01.427 "method": "nvmf_subsystem_add_host", 00:15:01.427 "params": { 00:15:01.427 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.427 "host": "nqn.2016-06.io.spdk:host1", 00:15:01.427 "psk": "/tmp/tmp.egpszoN2Nk" 00:15:01.427 } 00:15:01.427 }, 00:15:01.427 { 00:15:01.427 "method": "nvmf_subsystem_add_ns", 00:15:01.427 "params": { 00:15:01.427 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.427 "namespace": { 00:15:01.427 "nsid": 1, 00:15:01.427 "bdev_name": "malloc0", 00:15:01.427 "nguid": "DC20951158C44A02AA5737BFDDE073A5", 00:15:01.427 "uuid": "dc209511-58c4-4a02-aa57-37bfdde073a5", 00:15:01.427 "no_auto_visible": false 00:15:01.427 } 00:15:01.427 } 00:15:01.427 }, 00:15:01.427 { 00:15:01.427 "method": "nvmf_subsystem_add_listener", 00:15:01.427 
"params": { 00:15:01.427 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.427 "listen_address": { 00:15:01.427 "trtype": "TCP", 00:15:01.427 "adrfam": "IPv4", 00:15:01.427 "traddr": "10.0.0.2", 00:15:01.427 "trsvcid": "4420" 00:15:01.427 }, 00:15:01.427 "secure_channel": true 00:15:01.427 } 00:15:01.427 } 00:15:01.427 ] 00:15:01.427 } 00:15:01.427 ] 00:15:01.427 }' 00:15:01.427 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:01.427 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:01.427 10:55:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2802817 00:15:01.427 10:55:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:01.427 10:55:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2802817 00:15:01.427 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2802817 ']' 00:15:01.427 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.427 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:01.427 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.427 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:01.427 10:55:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:01.427 [2024-05-15 10:55:17.431793] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:15:01.427 [2024-05-15 10:55:17.431880] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:01.427 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.427 [2024-05-15 10:55:17.519973] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.427 [2024-05-15 10:55:17.638334] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:01.427 [2024-05-15 10:55:17.638407] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:01.427 [2024-05-15 10:55:17.638421] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:01.427 [2024-05-15 10:55:17.638432] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:01.427 [2024-05-15 10:55:17.638460] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:01.427 [2024-05-15 10:55:17.638551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:01.687 [2024-05-15 10:55:17.878868] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:01.687 [2024-05-15 10:55:17.894811] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:01.687 [2024-05-15 10:55:17.910825] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:01.687 [2024-05-15 10:55:17.910906] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:01.945 [2024-05-15 10:55:17.922137] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.513 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:02.513 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:02.513 10:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:02.513 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:02.513 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.513 10:55:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:02.513 10:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2802968 00:15:02.513 10:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2802968 /var/tmp/bdevperf.sock 00:15:02.513 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2802968 ']' 00:15:02.513 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:02.513 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:02.513 10:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:02.513 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:02.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
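bdevperf is started with -z, so it comes up idle and only listens on /var/tmp/bdevperf.sock; the queued workload (-q 128 -o 4096 -w verify -t 10) does not run until a perform_tests RPC arrives, which tls.sh sends later through bdevperf.py. A sketch of that two-step control flow using the flags from this run (the -t 20 on the helper appears to be its RPC timeout, distinct from the 10-second test duration):

    # Step 1: start bdevperf idle (-z) on its own RPC socket, with the bdev
    # layer configured from JSON piped in over process substitution.
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &

    # Step 2: once waitforlisten sees the socket, trigger the queued workload.
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests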
00:15:02.513 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:02.513 10:55:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:02.513 10:55:18 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:15:02.513 "subsystems": [ 00:15:02.513 { 00:15:02.513 "subsystem": "keyring", 00:15:02.513 "config": [] 00:15:02.513 }, 00:15:02.513 { 00:15:02.513 "subsystem": "iobuf", 00:15:02.513 "config": [ 00:15:02.513 { 00:15:02.513 "method": "iobuf_set_options", 00:15:02.513 "params": { 00:15:02.513 "small_pool_count": 8192, 00:15:02.513 "large_pool_count": 1024, 00:15:02.513 "small_bufsize": 8192, 00:15:02.513 "large_bufsize": 135168 00:15:02.513 } 00:15:02.513 } 00:15:02.513 ] 00:15:02.513 }, 00:15:02.513 { 00:15:02.513 "subsystem": "sock", 00:15:02.513 "config": [ 00:15:02.513 { 00:15:02.513 "method": "sock_set_default_impl", 00:15:02.513 "params": { 00:15:02.513 "impl_name": "posix" 00:15:02.513 } 00:15:02.513 }, 00:15:02.513 { 00:15:02.513 "method": "sock_impl_set_options", 00:15:02.513 "params": { 00:15:02.513 "impl_name": "ssl", 00:15:02.513 "recv_buf_size": 4096, 00:15:02.513 "send_buf_size": 4096, 00:15:02.513 "enable_recv_pipe": true, 00:15:02.513 "enable_quickack": false, 00:15:02.513 "enable_placement_id": 0, 00:15:02.513 "enable_zerocopy_send_server": true, 00:15:02.513 "enable_zerocopy_send_client": false, 00:15:02.513 "zerocopy_threshold": 0, 00:15:02.513 "tls_version": 0, 00:15:02.513 "enable_ktls": false 00:15:02.513 } 00:15:02.513 }, 00:15:02.513 { 00:15:02.513 "method": "sock_impl_set_options", 00:15:02.513 "params": { 00:15:02.513 "impl_name": "posix", 00:15:02.513 "recv_buf_size": 2097152, 00:15:02.513 "send_buf_size": 2097152, 00:15:02.513 "enable_recv_pipe": true, 00:15:02.513 "enable_quickack": false, 00:15:02.513 "enable_placement_id": 0, 00:15:02.513 "enable_zerocopy_send_server": true, 00:15:02.513 "enable_zerocopy_send_client": false, 00:15:02.513 "zerocopy_threshold": 0, 00:15:02.513 "tls_version": 0, 00:15:02.513 "enable_ktls": false 00:15:02.513 } 00:15:02.513 } 00:15:02.513 ] 00:15:02.513 }, 00:15:02.513 { 00:15:02.513 "subsystem": "vmd", 00:15:02.513 "config": [] 00:15:02.513 }, 00:15:02.513 { 00:15:02.513 "subsystem": "accel", 00:15:02.513 "config": [ 00:15:02.513 { 00:15:02.513 "method": "accel_set_options", 00:15:02.513 "params": { 00:15:02.513 "small_cache_size": 128, 00:15:02.513 "large_cache_size": 16, 00:15:02.513 "task_count": 2048, 00:15:02.513 "sequence_count": 2048, 00:15:02.513 "buf_count": 2048 00:15:02.513 } 00:15:02.513 } 00:15:02.513 ] 00:15:02.513 }, 00:15:02.513 { 00:15:02.513 "subsystem": "bdev", 00:15:02.513 "config": [ 00:15:02.513 { 00:15:02.513 "method": "bdev_set_options", 00:15:02.513 "params": { 00:15:02.513 "bdev_io_pool_size": 65535, 00:15:02.513 "bdev_io_cache_size": 256, 00:15:02.513 "bdev_auto_examine": true, 00:15:02.513 "iobuf_small_cache_size": 128, 00:15:02.513 "iobuf_large_cache_size": 16 00:15:02.513 } 00:15:02.513 }, 00:15:02.513 { 00:15:02.513 "method": "bdev_raid_set_options", 00:15:02.513 "params": { 00:15:02.513 "process_window_size_kb": 1024 00:15:02.513 } 00:15:02.513 }, 00:15:02.513 { 00:15:02.513 "method": "bdev_iscsi_set_options", 00:15:02.513 "params": { 00:15:02.513 "timeout_sec": 30 00:15:02.513 } 00:15:02.513 }, 00:15:02.513 { 00:15:02.513 "method": "bdev_nvme_set_options", 00:15:02.513 "params": { 00:15:02.513 "action_on_timeout": "none", 00:15:02.513 "timeout_us": 0, 00:15:02.513 "timeout_admin_us": 0, 00:15:02.513 "keep_alive_timeout_ms": 10000, 00:15:02.513 
"arbitration_burst": 0, 00:15:02.513 "low_priority_weight": 0, 00:15:02.513 "medium_priority_weight": 0, 00:15:02.513 "high_priority_weight": 0, 00:15:02.513 "nvme_adminq_poll_period_us": 10000, 00:15:02.513 "nvme_ioq_poll_period_us": 0, 00:15:02.513 "io_queue_requests": 512, 00:15:02.513 "delay_cmd_submit": true, 00:15:02.513 "transport_retry_count": 4, 00:15:02.513 "bdev_retry_count": 3, 00:15:02.513 "transport_ack_timeout": 0, 00:15:02.513 "ctrlr_loss_timeout_sec": 0, 00:15:02.513 "reconnect_delay_sec": 0, 00:15:02.513 "fast_io_fail_timeout_sec": 0, 00:15:02.513 "disable_auto_failback": false, 00:15:02.513 "generate_uuids": false, 00:15:02.513 "transport_tos": 0, 00:15:02.513 "nvme_error_stat": false, 00:15:02.513 "rdma_srq_size": 0, 00:15:02.513 "io_path_stat": false, 00:15:02.513 "allow_accel_sequence": false, 00:15:02.513 "rdma_max_cq_size": 0, 00:15:02.513 "rdma_cm_event_timeout_ms": 0, 00:15:02.513 "dhchap_digests": [ 00:15:02.513 "sha256", 00:15:02.513 "sha384", 00:15:02.513 "sha512" 00:15:02.513 ], 00:15:02.513 "dhchap_dhgroups": [ 00:15:02.513 "null", 00:15:02.513 "ffdhe2048", 00:15:02.513 "ffdhe3072", 00:15:02.513 "ffdhe4096", 00:15:02.513 "ffdhe6144", 00:15:02.513 "ffdhe8192" 00:15:02.513 ] 00:15:02.513 } 00:15:02.513 }, 00:15:02.513 { 00:15:02.513 "method": "bdev_nvme_attach_controller", 00:15:02.513 "params": { 00:15:02.513 "name": "TLSTEST", 00:15:02.513 "trtype": "TCP", 00:15:02.513 "adrfam": "IPv4", 00:15:02.513 "traddr": "10.0.0.2", 00:15:02.513 "trsvcid": "4420", 00:15:02.513 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:02.513 "prchk_reftag": false, 00:15:02.513 "prchk_guard": false, 00:15:02.513 "ctrlr_loss_timeout_sec": 0, 00:15:02.513 "reconnect_delay_sec": 0, 00:15:02.513 "fast_io_fail_timeout_sec": 0, 00:15:02.513 "psk": "/tmp/tmp.egpszoN2Nk", 00:15:02.513 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:02.513 "hdgst": false, 00:15:02.513 "ddgst": false 00:15:02.513 } 00:15:02.513 }, 00:15:02.513 { 00:15:02.513 "method": "bdev_nvme_set_hotplug", 00:15:02.513 "params": { 00:15:02.513 "period_us": 100000, 00:15:02.513 "enable": false 00:15:02.513 } 00:15:02.513 }, 00:15:02.513 { 00:15:02.513 "method": "bdev_wait_for_examine" 00:15:02.513 } 00:15:02.513 ] 00:15:02.513 }, 00:15:02.513 { 00:15:02.513 "subsystem": "nbd", 00:15:02.513 "config": [] 00:15:02.513 } 00:15:02.513 ] 00:15:02.513 }' 00:15:02.513 [2024-05-15 10:55:18.505805] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:15:02.513 [2024-05-15 10:55:18.505885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2802968 ] 00:15:02.513 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.514 [2024-05-15 10:55:18.574411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.514 [2024-05-15 10:55:18.683202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:02.821 [2024-05-15 10:55:18.854935] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:02.821 [2024-05-15 10:55:18.855086] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:03.444 10:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:03.444 10:55:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:03.444 10:55:19 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:03.444 Running I/O for 10 seconds... 00:15:15.639 00:15:15.639 Latency(us) 00:15:15.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.639 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:15.639 Verification LBA range: start 0x0 length 0x2000 00:15:15.639 TLSTESTn1 : 10.12 881.61 3.44 0.00 0.00 144512.67 8689.59 215928.98 00:15:15.639 =================================================================================================================== 00:15:15.639 Total : 881.61 3.44 0.00 0.00 144512.67 8689.59 215928.98 00:15:15.639 0 00:15:15.639 10:55:29 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:15.639 10:55:29 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2802968 00:15:15.639 10:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2802968 ']' 00:15:15.639 10:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2802968 00:15:15.639 10:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:15.639 10:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:15.639 10:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2802968 00:15:15.639 10:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:15:15.639 10:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:15:15.639 10:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2802968' 00:15:15.639 killing process with pid 2802968 00:15:15.639 10:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2802968 00:15:15.639 Received shutdown signal, test time was about 10.000000 seconds 00:15:15.640 00:15:15.640 Latency(us) 00:15:15.640 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:15.640 =================================================================================================================== 00:15:15.640 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:15.640 [2024-05-15 10:55:29.777801] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:15:15.640 10:55:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2802968 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2802817 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2802817 ']' 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2802817 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2802817 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2802817' 00:15:15.640 killing process with pid 2802817 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2802817 00:15:15.640 [2024-05-15 10:55:30.081300] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:15.640 [2024-05-15 10:55:30.081354] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2802817 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2804304 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2804304 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2804304 ']' 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:15.640 [2024-05-15 10:55:30.431832] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
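The killprocess traces above (autotest_common.sh@946-@965) repeat the same pattern for every pid: validate the argument, confirm the process is alive, check via ps that it is not a sudo wrapper, then print the banner and kill it; the separate @970 step waits for it to exit. A rough reconstruction of the helper as the xtrace suggests it behaves (a sketch inferred from the trace, not copied from autotest_common.sh):

    killprocess() {
        local pid=$1 process_name=
        [ -n "$pid" ] || return 1                            # @946: empty pid
        kill -0 "$pid" || return 1                           # @950: must be alive
        if [ "$(uname)" = Linux ]; then                      # @951
            process_name=$(ps --no-headers -o comm= "$pid")  # @952
        fi
        # @956: a sudo-wrapped process would need different handling; the
        # plain-kill path below is what every trace in this log takes.
        [ "$process_name" != sudo ] || return 1
        echo "killing process with pid $pid"                 # @964
        kill "$pid"                                          # @965
    }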
00:15:15.640 [2024-05-15 10:55:30.431911] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.640 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.640 [2024-05-15 10:55:30.508669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.640 [2024-05-15 10:55:30.620466] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.640 [2024-05-15 10:55:30.620535] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.640 [2024-05-15 10:55:30.620549] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.640 [2024-05-15 10:55:30.620560] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.640 [2024-05-15 10:55:30.620569] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:15.640 [2024-05-15 10:55:30.620595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.egpszoN2Nk 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.egpszoN2Nk 00:15:15.640 10:55:30 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:15.640 [2024-05-15 10:55:30.986963] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:15.640 10:55:31 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:15.640 10:55:31 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:15:15.640 [2024-05-15 10:55:31.460193] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:15.640 [2024-05-15 10:55:31.460293] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:15.640 [2024-05-15 10:55:31.460501] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.640 10:55:31 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:15.640 malloc0 00:15:15.640 10:55:31 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 
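This is setup_nvmf_tgt (tls.sh@49-@58) replayed as individual RPCs: a TCP transport, subsystem cnode1, a listener created with -k so the channel must be TLS, a 32 MiB malloc bdev attached as namespace 1, and, immediately below, nvmf_subsystem_add_host --psk to authorize host1 against the pre-shared key file. The same sequence condensed (commands exactly as they appear in this log, only the workspace prefix dropped):

    key=/tmp/tmp.egpszoN2Nk   # PSK file created earlier by tls.sh
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k        # -k: require a secure channel
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk "$key"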
00:15:15.898 10:55:31 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.egpszoN2Nk 00:15:16.156 [2024-05-15 10:55:32.193826] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:16.156 10:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2804583 00:15:16.156 10:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:16.156 10:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:16.156 10:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2804583 /var/tmp/bdevperf.sock 00:15:16.156 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2804583 ']' 00:15:16.156 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:16.156 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:16.156 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:16.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:16.156 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:16.156 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:16.156 [2024-05-15 10:55:32.253999] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:15:16.156 [2024-05-15 10:55:32.254078] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2804583 ] 00:15:16.156 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.156 [2024-05-15 10:55:32.325731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.414 [2024-05-15 10:55:32.434623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.414 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:16.414 10:55:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:16.414 10:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.egpszoN2Nk 00:15:16.672 10:55:32 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:16.930 [2024-05-15 10:55:33.007812] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:16.930 nvme0n1 00:15:16.930 10:55:33 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:17.187 Running I/O for 1 seconds... 
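On the initiator side the PSK is no longer handed over as a raw path: the file is first registered in the keyring as key0, and bdev_nvme_attach_controller then references the key by name via --psk. This is the replacement for the 'PSK path' / spdk_nvme_ctrlr_opts.psk flows that the deprecation warnings in this log say are scheduled for removal in v24.09. The two calls against the bdevperf RPC socket, as issued above:

    scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key \
        key0 /tmp/tmp.egpszoN2Nk
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1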
00:15:18.119 00:15:18.119 Latency(us) 00:15:18.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:18.119 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:18.119 Verification LBA range: start 0x0 length 0x2000 00:15:18.119 nvme0n1 : 1.08 1018.45 3.98 0.00 0.00 122011.31 6456.51 165441.99 00:15:18.119 =================================================================================================================== 00:15:18.119 Total : 1018.45 3.98 0.00 0.00 122011.31 6456.51 165441.99 00:15:18.119 0 00:15:18.119 10:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2804583 00:15:18.119 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2804583 ']' 00:15:18.119 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2804583 00:15:18.119 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:18.119 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:18.119 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2804583 00:15:18.119 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:18.119 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:18.119 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2804583' 00:15:18.119 killing process with pid 2804583 00:15:18.119 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2804583 00:15:18.119 Received shutdown signal, test time was about 1.000000 seconds 00:15:18.119 00:15:18.119 Latency(us) 00:15:18.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:18.119 =================================================================================================================== 00:15:18.119 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:18.119 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2804583 00:15:18.378 10:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2804304 00:15:18.378 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2804304 ']' 00:15:18.378 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2804304 00:15:18.378 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:18.378 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:18.378 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2804304 00:15:18.636 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:18.636 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:18.636 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2804304' 00:15:18.636 killing process with pid 2804304 00:15:18.636 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2804304 00:15:18.636 [2024-05-15 10:55:34.629238] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:18.636 [2024-05-15 10:55:34.629296] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:18.636 10:55:34 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@970 -- # wait 2804304 00:15:18.894 10:55:34 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:15:18.894 10:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:18.894 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:18.894 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:18.894 10:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2804872 00:15:18.894 10:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:18.894 10:55:34 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2804872 00:15:18.894 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2804872 ']' 00:15:18.894 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:18.894 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:18.894 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:18.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:18.894 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:18.894 10:55:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:18.894 [2024-05-15 10:55:34.971573] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:15:18.894 [2024-05-15 10:55:34.971674] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:18.894 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.894 [2024-05-15 10:55:35.051411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:19.153 [2024-05-15 10:55:35.166259] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.153 [2024-05-15 10:55:35.166325] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:19.153 [2024-05-15 10:55:35.166341] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.153 [2024-05-15 10:55:35.166355] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.153 [2024-05-15 10:55:35.166367] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
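Every waitforlisten call in this log prints the same banner and then polls until the freshly forked app answers on its UNIX-domain RPC socket; the (( i == 0 )) / return 0 trace that follows each one is the retry loop succeeding. A rough sketch of such a helper, inferred from the trace rather than copied from autotest_common.sh (the liveness probe shown is hypothetical; the real helper's check differs):

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}   # @831
        local max_retries=100                     # @832
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        local i
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died while starting
            # Hypothetical probe: any cheap RPC that only succeeds once the
            # app is actually serving its socket.
            scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }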
00:15:19.153 [2024-05-15 10:55:35.166409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:19.719 10:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:19.719 10:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:19.719 10:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:19.719 10:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:19.719 10:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:19.719 10:55:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.719 10:55:35 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:15:19.719 10:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.719 10:55:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:19.719 [2024-05-15 10:55:35.946384] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.978 malloc0 00:15:19.978 [2024-05-15 10:55:35.979258] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:19.978 [2024-05-15 10:55:35.979371] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:19.978 [2024-05-15 10:55:35.979603] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.978 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.978 10:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=2805025 00:15:19.978 10:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 2805025 /var/tmp/bdevperf.sock 00:15:19.978 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2805025 ']' 00:15:19.978 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:19.978 10:55:36 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:19.978 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:19.978 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:19.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:19.978 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:19.978 10:55:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:19.978 [2024-05-15 10:55:36.050095] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:15:19.978 [2024-05-15 10:55:36.050181] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2805025 ] 00:15:19.978 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.978 [2024-05-15 10:55:36.122862] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:20.236 [2024-05-15 10:55:36.239434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.802 10:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:20.802 10:55:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:20.802 10:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.egpszoN2Nk 00:15:21.060 10:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:21.318 [2024-05-15 10:55:37.490210] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:21.576 nvme0n1 00:15:21.576 10:55:37 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:21.576 Running I/O for 1 seconds... 00:15:22.947 00:15:22.947 Latency(us) 00:15:22.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:22.947 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:22.947 Verification LBA range: start 0x0 length 0x2000 00:15:22.947 nvme0n1 : 1.10 1019.45 3.98 0.00 0.00 121176.53 9466.31 160004.93 00:15:22.947 =================================================================================================================== 00:15:22.947 Total : 1019.45 3.98 0.00 0.00 121176.53 9466.31 160004.93 00:15:22.947 0 00:15:22.947 10:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:15:22.947 10:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.947 10:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:22.948 10:55:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.948 10:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:15:22.948 "subsystems": [ 00:15:22.948 { 00:15:22.948 "subsystem": "keyring", 00:15:22.948 "config": [ 00:15:22.948 { 00:15:22.948 "method": "keyring_file_add_key", 00:15:22.948 "params": { 00:15:22.948 "name": "key0", 00:15:22.948 "path": "/tmp/tmp.egpszoN2Nk" 00:15:22.948 } 00:15:22.948 } 00:15:22.948 ] 00:15:22.948 }, 00:15:22.948 { 00:15:22.948 "subsystem": "iobuf", 00:15:22.948 "config": [ 00:15:22.948 { 00:15:22.948 "method": "iobuf_set_options", 00:15:22.948 "params": { 00:15:22.948 "small_pool_count": 8192, 00:15:22.948 "large_pool_count": 1024, 00:15:22.948 "small_bufsize": 8192, 00:15:22.948 "large_bufsize": 135168 00:15:22.948 } 00:15:22.948 } 00:15:22.948 ] 00:15:22.948 }, 00:15:22.948 { 00:15:22.948 "subsystem": "sock", 00:15:22.948 "config": [ 00:15:22.948 { 00:15:22.948 "method": "sock_set_default_impl", 00:15:22.948 "params": { 00:15:22.948 "impl_name": "posix" 00:15:22.948 } 00:15:22.948 }, 
00:15:22.948 { 00:15:22.948 "method": "sock_impl_set_options", 00:15:22.948 "params": { 00:15:22.948 "impl_name": "ssl", 00:15:22.948 "recv_buf_size": 4096, 00:15:22.948 "send_buf_size": 4096, 00:15:22.948 "enable_recv_pipe": true, 00:15:22.948 "enable_quickack": false, 00:15:22.948 "enable_placement_id": 0, 00:15:22.948 "enable_zerocopy_send_server": true, 00:15:22.948 "enable_zerocopy_send_client": false, 00:15:22.948 "zerocopy_threshold": 0, 00:15:22.948 "tls_version": 0, 00:15:22.948 "enable_ktls": false 00:15:22.948 } 00:15:22.948 }, 00:15:22.948 { 00:15:22.948 "method": "sock_impl_set_options", 00:15:22.948 "params": { 00:15:22.948 "impl_name": "posix", 00:15:22.948 "recv_buf_size": 2097152, 00:15:22.948 "send_buf_size": 2097152, 00:15:22.948 "enable_recv_pipe": true, 00:15:22.948 "enable_quickack": false, 00:15:22.948 "enable_placement_id": 0, 00:15:22.948 "enable_zerocopy_send_server": true, 00:15:22.948 "enable_zerocopy_send_client": false, 00:15:22.948 "zerocopy_threshold": 0, 00:15:22.948 "tls_version": 0, 00:15:22.948 "enable_ktls": false 00:15:22.948 } 00:15:22.948 } 00:15:22.948 ] 00:15:22.948 }, 00:15:22.948 { 00:15:22.948 "subsystem": "vmd", 00:15:22.948 "config": [] 00:15:22.948 }, 00:15:22.948 { 00:15:22.948 "subsystem": "accel", 00:15:22.948 "config": [ 00:15:22.948 { 00:15:22.948 "method": "accel_set_options", 00:15:22.948 "params": { 00:15:22.948 "small_cache_size": 128, 00:15:22.948 "large_cache_size": 16, 00:15:22.948 "task_count": 2048, 00:15:22.948 "sequence_count": 2048, 00:15:22.948 "buf_count": 2048 00:15:22.948 } 00:15:22.948 } 00:15:22.948 ] 00:15:22.948 }, 00:15:22.948 { 00:15:22.948 "subsystem": "bdev", 00:15:22.948 "config": [ 00:15:22.948 { 00:15:22.948 "method": "bdev_set_options", 00:15:22.948 "params": { 00:15:22.948 "bdev_io_pool_size": 65535, 00:15:22.948 "bdev_io_cache_size": 256, 00:15:22.948 "bdev_auto_examine": true, 00:15:22.948 "iobuf_small_cache_size": 128, 00:15:22.948 "iobuf_large_cache_size": 16 00:15:22.948 } 00:15:22.948 }, 00:15:22.948 { 00:15:22.948 "method": "bdev_raid_set_options", 00:15:22.948 "params": { 00:15:22.948 "process_window_size_kb": 1024 00:15:22.948 } 00:15:22.948 }, 00:15:22.948 { 00:15:22.948 "method": "bdev_iscsi_set_options", 00:15:22.948 "params": { 00:15:22.948 "timeout_sec": 30 00:15:22.948 } 00:15:22.948 }, 00:15:22.948 { 00:15:22.948 "method": "bdev_nvme_set_options", 00:15:22.948 "params": { 00:15:22.948 "action_on_timeout": "none", 00:15:22.948 "timeout_us": 0, 00:15:22.948 "timeout_admin_us": 0, 00:15:22.948 "keep_alive_timeout_ms": 10000, 00:15:22.948 "arbitration_burst": 0, 00:15:22.948 "low_priority_weight": 0, 00:15:22.948 "medium_priority_weight": 0, 00:15:22.948 "high_priority_weight": 0, 00:15:22.948 "nvme_adminq_poll_period_us": 10000, 00:15:22.948 "nvme_ioq_poll_period_us": 0, 00:15:22.948 "io_queue_requests": 0, 00:15:22.948 "delay_cmd_submit": true, 00:15:22.948 "transport_retry_count": 4, 00:15:22.948 "bdev_retry_count": 3, 00:15:22.948 "transport_ack_timeout": 0, 00:15:22.948 "ctrlr_loss_timeout_sec": 0, 00:15:22.948 "reconnect_delay_sec": 0, 00:15:22.948 "fast_io_fail_timeout_sec": 0, 00:15:22.948 "disable_auto_failback": false, 00:15:22.948 "generate_uuids": false, 00:15:22.948 "transport_tos": 0, 00:15:22.948 "nvme_error_stat": false, 00:15:22.948 "rdma_srq_size": 0, 00:15:22.948 "io_path_stat": false, 00:15:22.948 "allow_accel_sequence": false, 00:15:22.948 "rdma_max_cq_size": 0, 00:15:22.948 "rdma_cm_event_timeout_ms": 0, 00:15:22.948 "dhchap_digests": [ 00:15:22.948 "sha256", 00:15:22.948 
"sha384", 00:15:22.948 "sha512" 00:15:22.948 ], 00:15:22.948 "dhchap_dhgroups": [ 00:15:22.948 "null", 00:15:22.948 "ffdhe2048", 00:15:22.948 "ffdhe3072", 00:15:22.948 "ffdhe4096", 00:15:22.948 "ffdhe6144", 00:15:22.948 "ffdhe8192" 00:15:22.948 ] 00:15:22.948 } 00:15:22.948 }, 00:15:22.948 { 00:15:22.948 "method": "bdev_nvme_set_hotplug", 00:15:22.948 "params": { 00:15:22.948 "period_us": 100000, 00:15:22.948 "enable": false 00:15:22.948 } 00:15:22.948 }, 00:15:22.948 { 00:15:22.948 "method": "bdev_malloc_create", 00:15:22.948 "params": { 00:15:22.948 "name": "malloc0", 00:15:22.948 "num_blocks": 8192, 00:15:22.948 "block_size": 4096, 00:15:22.948 "physical_block_size": 4096, 00:15:22.948 "uuid": "00e08d78-84ca-4898-9447-20478ffe7b56", 00:15:22.948 "optimal_io_boundary": 0 00:15:22.948 } 00:15:22.948 }, 00:15:22.948 { 00:15:22.948 "method": "bdev_wait_for_examine" 00:15:22.948 } 00:15:22.948 ] 00:15:22.948 }, 00:15:22.948 { 00:15:22.948 "subsystem": "nbd", 00:15:22.948 "config": [] 00:15:22.948 }, 00:15:22.948 { 00:15:22.948 "subsystem": "scheduler", 00:15:22.948 "config": [ 00:15:22.948 { 00:15:22.948 "method": "framework_set_scheduler", 00:15:22.948 "params": { 00:15:22.948 "name": "static" 00:15:22.948 } 00:15:22.948 } 00:15:22.948 ] 00:15:22.948 }, 00:15:22.948 { 00:15:22.948 "subsystem": "nvmf", 00:15:22.948 "config": [ 00:15:22.948 { 00:15:22.948 "method": "nvmf_set_config", 00:15:22.948 "params": { 00:15:22.948 "discovery_filter": "match_any", 00:15:22.948 "admin_cmd_passthru": { 00:15:22.948 "identify_ctrlr": false 00:15:22.948 } 00:15:22.948 } 00:15:22.948 }, 00:15:22.948 { 00:15:22.948 "method": "nvmf_set_max_subsystems", 00:15:22.948 "params": { 00:15:22.948 "max_subsystems": 1024 00:15:22.948 } 00:15:22.948 }, 00:15:22.948 { 00:15:22.948 "method": "nvmf_set_crdt", 00:15:22.948 "params": { 00:15:22.948 "crdt1": 0, 00:15:22.948 "crdt2": 0, 00:15:22.948 "crdt3": 0 00:15:22.948 } 00:15:22.948 }, 00:15:22.948 { 00:15:22.948 "method": "nvmf_create_transport", 00:15:22.948 "params": { 00:15:22.948 "trtype": "TCP", 00:15:22.948 "max_queue_depth": 128, 00:15:22.948 "max_io_qpairs_per_ctrlr": 127, 00:15:22.948 "in_capsule_data_size": 4096, 00:15:22.948 "max_io_size": 131072, 00:15:22.948 "io_unit_size": 131072, 00:15:22.948 "max_aq_depth": 128, 00:15:22.948 "num_shared_buffers": 511, 00:15:22.948 "buf_cache_size": 4294967295, 00:15:22.948 "dif_insert_or_strip": false, 00:15:22.948 "zcopy": false, 00:15:22.948 "c2h_success": false, 00:15:22.948 "sock_priority": 0, 00:15:22.948 "abort_timeout_sec": 1, 00:15:22.948 "ack_timeout": 0, 00:15:22.948 "data_wr_pool_size": 0 00:15:22.948 } 00:15:22.948 }, 00:15:22.948 { 00:15:22.948 "method": "nvmf_create_subsystem", 00:15:22.948 "params": { 00:15:22.948 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:22.948 "allow_any_host": false, 00:15:22.948 "serial_number": "00000000000000000000", 00:15:22.948 "model_number": "SPDK bdev Controller", 00:15:22.948 "max_namespaces": 32, 00:15:22.948 "min_cntlid": 1, 00:15:22.948 "max_cntlid": 65519, 00:15:22.948 "ana_reporting": false 00:15:22.948 } 00:15:22.948 }, 00:15:22.948 { 00:15:22.948 "method": "nvmf_subsystem_add_host", 00:15:22.948 "params": { 00:15:22.948 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:22.948 "host": "nqn.2016-06.io.spdk:host1", 00:15:22.948 "psk": "key0" 00:15:22.948 } 00:15:22.948 }, 00:15:22.948 { 00:15:22.948 "method": "nvmf_subsystem_add_ns", 00:15:22.948 "params": { 00:15:22.948 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:22.948 "namespace": { 00:15:22.948 "nsid": 1, 00:15:22.948 
"bdev_name": "malloc0", 00:15:22.948 "nguid": "00E08D7884CA4898944720478FFE7B56", 00:15:22.948 "uuid": "00e08d78-84ca-4898-9447-20478ffe7b56", 00:15:22.948 "no_auto_visible": false 00:15:22.948 } 00:15:22.948 } 00:15:22.948 }, 00:15:22.948 { 00:15:22.948 "method": "nvmf_subsystem_add_listener", 00:15:22.948 "params": { 00:15:22.948 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:22.948 "listen_address": { 00:15:22.948 "trtype": "TCP", 00:15:22.948 "adrfam": "IPv4", 00:15:22.948 "traddr": "10.0.0.2", 00:15:22.948 "trsvcid": "4420" 00:15:22.948 }, 00:15:22.948 "secure_channel": true 00:15:22.948 } 00:15:22.948 } 00:15:22.948 ] 00:15:22.948 } 00:15:22.948 ] 00:15:22.948 }' 00:15:22.948 10:55:38 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:23.206 10:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:15:23.206 "subsystems": [ 00:15:23.206 { 00:15:23.206 "subsystem": "keyring", 00:15:23.206 "config": [ 00:15:23.206 { 00:15:23.206 "method": "keyring_file_add_key", 00:15:23.207 "params": { 00:15:23.207 "name": "key0", 00:15:23.207 "path": "/tmp/tmp.egpszoN2Nk" 00:15:23.207 } 00:15:23.207 } 00:15:23.207 ] 00:15:23.207 }, 00:15:23.207 { 00:15:23.207 "subsystem": "iobuf", 00:15:23.207 "config": [ 00:15:23.207 { 00:15:23.207 "method": "iobuf_set_options", 00:15:23.207 "params": { 00:15:23.207 "small_pool_count": 8192, 00:15:23.207 "large_pool_count": 1024, 00:15:23.207 "small_bufsize": 8192, 00:15:23.207 "large_bufsize": 135168 00:15:23.207 } 00:15:23.207 } 00:15:23.207 ] 00:15:23.207 }, 00:15:23.207 { 00:15:23.207 "subsystem": "sock", 00:15:23.207 "config": [ 00:15:23.207 { 00:15:23.207 "method": "sock_set_default_impl", 00:15:23.207 "params": { 00:15:23.207 "impl_name": "posix" 00:15:23.207 } 00:15:23.207 }, 00:15:23.207 { 00:15:23.207 "method": "sock_impl_set_options", 00:15:23.207 "params": { 00:15:23.207 "impl_name": "ssl", 00:15:23.207 "recv_buf_size": 4096, 00:15:23.207 "send_buf_size": 4096, 00:15:23.207 "enable_recv_pipe": true, 00:15:23.207 "enable_quickack": false, 00:15:23.207 "enable_placement_id": 0, 00:15:23.207 "enable_zerocopy_send_server": true, 00:15:23.207 "enable_zerocopy_send_client": false, 00:15:23.207 "zerocopy_threshold": 0, 00:15:23.207 "tls_version": 0, 00:15:23.207 "enable_ktls": false 00:15:23.207 } 00:15:23.207 }, 00:15:23.207 { 00:15:23.207 "method": "sock_impl_set_options", 00:15:23.207 "params": { 00:15:23.207 "impl_name": "posix", 00:15:23.207 "recv_buf_size": 2097152, 00:15:23.207 "send_buf_size": 2097152, 00:15:23.207 "enable_recv_pipe": true, 00:15:23.207 "enable_quickack": false, 00:15:23.207 "enable_placement_id": 0, 00:15:23.207 "enable_zerocopy_send_server": true, 00:15:23.207 "enable_zerocopy_send_client": false, 00:15:23.207 "zerocopy_threshold": 0, 00:15:23.207 "tls_version": 0, 00:15:23.207 "enable_ktls": false 00:15:23.207 } 00:15:23.207 } 00:15:23.207 ] 00:15:23.207 }, 00:15:23.207 { 00:15:23.207 "subsystem": "vmd", 00:15:23.207 "config": [] 00:15:23.207 }, 00:15:23.207 { 00:15:23.207 "subsystem": "accel", 00:15:23.207 "config": [ 00:15:23.207 { 00:15:23.207 "method": "accel_set_options", 00:15:23.207 "params": { 00:15:23.207 "small_cache_size": 128, 00:15:23.207 "large_cache_size": 16, 00:15:23.207 "task_count": 2048, 00:15:23.207 "sequence_count": 2048, 00:15:23.207 "buf_count": 2048 00:15:23.207 } 00:15:23.207 } 00:15:23.207 ] 00:15:23.207 }, 00:15:23.207 { 00:15:23.207 "subsystem": "bdev", 00:15:23.207 "config": [ 00:15:23.207 { 
00:15:23.207 "method": "bdev_set_options", 00:15:23.207 "params": { 00:15:23.207 "bdev_io_pool_size": 65535, 00:15:23.207 "bdev_io_cache_size": 256, 00:15:23.207 "bdev_auto_examine": true, 00:15:23.207 "iobuf_small_cache_size": 128, 00:15:23.207 "iobuf_large_cache_size": 16 00:15:23.207 } 00:15:23.207 }, 00:15:23.207 { 00:15:23.207 "method": "bdev_raid_set_options", 00:15:23.207 "params": { 00:15:23.207 "process_window_size_kb": 1024 00:15:23.207 } 00:15:23.207 }, 00:15:23.207 { 00:15:23.207 "method": "bdev_iscsi_set_options", 00:15:23.207 "params": { 00:15:23.207 "timeout_sec": 30 00:15:23.207 } 00:15:23.207 }, 00:15:23.207 { 00:15:23.207 "method": "bdev_nvme_set_options", 00:15:23.207 "params": { 00:15:23.207 "action_on_timeout": "none", 00:15:23.207 "timeout_us": 0, 00:15:23.207 "timeout_admin_us": 0, 00:15:23.207 "keep_alive_timeout_ms": 10000, 00:15:23.207 "arbitration_burst": 0, 00:15:23.207 "low_priority_weight": 0, 00:15:23.207 "medium_priority_weight": 0, 00:15:23.207 "high_priority_weight": 0, 00:15:23.207 "nvme_adminq_poll_period_us": 10000, 00:15:23.207 "nvme_ioq_poll_period_us": 0, 00:15:23.207 "io_queue_requests": 512, 00:15:23.207 "delay_cmd_submit": true, 00:15:23.207 "transport_retry_count": 4, 00:15:23.207 "bdev_retry_count": 3, 00:15:23.207 "transport_ack_timeout": 0, 00:15:23.207 "ctrlr_loss_timeout_sec": 0, 00:15:23.207 "reconnect_delay_sec": 0, 00:15:23.207 "fast_io_fail_timeout_sec": 0, 00:15:23.207 "disable_auto_failback": false, 00:15:23.207 "generate_uuids": false, 00:15:23.207 "transport_tos": 0, 00:15:23.207 "nvme_error_stat": false, 00:15:23.207 "rdma_srq_size": 0, 00:15:23.207 "io_path_stat": false, 00:15:23.207 "allow_accel_sequence": false, 00:15:23.207 "rdma_max_cq_size": 0, 00:15:23.207 "rdma_cm_event_timeout_ms": 0, 00:15:23.207 "dhchap_digests": [ 00:15:23.207 "sha256", 00:15:23.207 "sha384", 00:15:23.207 "sha512" 00:15:23.207 ], 00:15:23.207 "dhchap_dhgroups": [ 00:15:23.207 "null", 00:15:23.207 "ffdhe2048", 00:15:23.207 "ffdhe3072", 00:15:23.207 "ffdhe4096", 00:15:23.207 "ffdhe6144", 00:15:23.208 "ffdhe8192" 00:15:23.208 ] 00:15:23.208 } 00:15:23.208 }, 00:15:23.208 { 00:15:23.208 "method": "bdev_nvme_attach_controller", 00:15:23.208 "params": { 00:15:23.208 "name": "nvme0", 00:15:23.208 "trtype": "TCP", 00:15:23.208 "adrfam": "IPv4", 00:15:23.208 "traddr": "10.0.0.2", 00:15:23.208 "trsvcid": "4420", 00:15:23.208 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.208 "prchk_reftag": false, 00:15:23.208 "prchk_guard": false, 00:15:23.208 "ctrlr_loss_timeout_sec": 0, 00:15:23.208 "reconnect_delay_sec": 0, 00:15:23.208 "fast_io_fail_timeout_sec": 0, 00:15:23.208 "psk": "key0", 00:15:23.208 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:23.208 "hdgst": false, 00:15:23.208 "ddgst": false 00:15:23.208 } 00:15:23.208 }, 00:15:23.208 { 00:15:23.208 "method": "bdev_nvme_set_hotplug", 00:15:23.208 "params": { 00:15:23.208 "period_us": 100000, 00:15:23.208 "enable": false 00:15:23.208 } 00:15:23.208 }, 00:15:23.208 { 00:15:23.208 "method": "bdev_enable_histogram", 00:15:23.208 "params": { 00:15:23.208 "name": "nvme0n1", 00:15:23.208 "enable": true 00:15:23.208 } 00:15:23.208 }, 00:15:23.208 { 00:15:23.208 "method": "bdev_wait_for_examine" 00:15:23.208 } 00:15:23.208 ] 00:15:23.208 }, 00:15:23.208 { 00:15:23.208 "subsystem": "nbd", 00:15:23.208 "config": [] 00:15:23.208 } 00:15:23.208 ] 00:15:23.208 }' 00:15:23.208 10:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 2805025 00:15:23.208 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' 
-z 2805025 ']' 00:15:23.208 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2805025 00:15:23.208 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:23.208 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:23.208 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2805025 00:15:23.208 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:23.208 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:23.208 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2805025' 00:15:23.208 killing process with pid 2805025 00:15:23.208 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2805025 00:15:23.208 Received shutdown signal, test time was about 1.000000 seconds 00:15:23.208 00:15:23.208 Latency(us) 00:15:23.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.208 =================================================================================================================== 00:15:23.208 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:23.208 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2805025 00:15:23.466 10:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 2804872 00:15:23.466 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2804872 ']' 00:15:23.466 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2804872 00:15:23.466 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:23.466 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:23.466 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2804872 00:15:23.466 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:23.466 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:23.466 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2804872' 00:15:23.466 killing process with pid 2804872 00:15:23.466 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2804872 00:15:23.466 [2024-05-15 10:55:39.606563] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:23.466 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2804872 00:15:23.723 10:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:15:23.723 10:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:23.723 10:55:39 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:15:23.723 "subsystems": [ 00:15:23.723 { 00:15:23.723 "subsystem": "keyring", 00:15:23.723 "config": [ 00:15:23.723 { 00:15:23.723 "method": "keyring_file_add_key", 00:15:23.723 "params": { 00:15:23.723 "name": "key0", 00:15:23.723 "path": "/tmp/tmp.egpszoN2Nk" 00:15:23.723 } 00:15:23.723 } 00:15:23.723 ] 00:15:23.723 }, 00:15:23.723 { 00:15:23.723 "subsystem": "iobuf", 00:15:23.723 "config": [ 00:15:23.723 { 00:15:23.723 "method": "iobuf_set_options", 00:15:23.723 "params": { 00:15:23.723 "small_pool_count": 8192, 00:15:23.723 "large_pool_count": 1024, 00:15:23.723 "small_bufsize": 8192, 
00:15:23.723 "large_bufsize": 135168 00:15:23.723 } 00:15:23.723 } 00:15:23.723 ] 00:15:23.723 }, 00:15:23.723 { 00:15:23.723 "subsystem": "sock", 00:15:23.723 "config": [ 00:15:23.723 { 00:15:23.723 "method": "sock_set_default_impl", 00:15:23.723 "params": { 00:15:23.723 "impl_name": "posix" 00:15:23.723 } 00:15:23.723 }, 00:15:23.723 { 00:15:23.723 "method": "sock_impl_set_options", 00:15:23.723 "params": { 00:15:23.723 "impl_name": "ssl", 00:15:23.723 "recv_buf_size": 4096, 00:15:23.723 "send_buf_size": 4096, 00:15:23.723 "enable_recv_pipe": true, 00:15:23.723 "enable_quickack": false, 00:15:23.723 "enable_placement_id": 0, 00:15:23.723 "enable_zerocopy_send_server": true, 00:15:23.723 "enable_zerocopy_send_client": false, 00:15:23.723 "zerocopy_threshold": 0, 00:15:23.723 "tls_version": 0, 00:15:23.723 "enable_ktls": false 00:15:23.723 } 00:15:23.723 }, 00:15:23.723 { 00:15:23.723 "method": "sock_impl_set_options", 00:15:23.723 "params": { 00:15:23.723 "impl_name": "posix", 00:15:23.723 "recv_buf_size": 2097152, 00:15:23.723 "send_buf_size": 2097152, 00:15:23.723 "enable_recv_pipe": true, 00:15:23.723 "enable_quickack": false, 00:15:23.723 "enable_placement_id": 0, 00:15:23.723 "enable_zerocopy_send_server": true, 00:15:23.723 "enable_zerocopy_send_client": false, 00:15:23.723 "zerocopy_threshold": 0, 00:15:23.723 "tls_version": 0, 00:15:23.723 "enable_ktls": false 00:15:23.723 } 00:15:23.723 } 00:15:23.723 ] 00:15:23.723 }, 00:15:23.723 { 00:15:23.723 "subsystem": "vmd", 00:15:23.723 "config": [] 00:15:23.723 }, 00:15:23.723 { 00:15:23.723 "subsystem": "accel", 00:15:23.723 "config": [ 00:15:23.723 { 00:15:23.723 "method": "accel_set_options", 00:15:23.723 "params": { 00:15:23.723 "small_cache_size": 128, 00:15:23.723 "large_cache_size": 16, 00:15:23.723 "task_count": 2048, 00:15:23.723 "sequence_count": 2048, 00:15:23.723 "buf_count": 2048 00:15:23.723 } 00:15:23.723 } 00:15:23.723 ] 00:15:23.723 }, 00:15:23.723 { 00:15:23.723 "subsystem": "bdev", 00:15:23.723 "config": [ 00:15:23.723 { 00:15:23.723 "method": "bdev_set_options", 00:15:23.723 "params": { 00:15:23.723 "bdev_io_pool_size": 65535, 00:15:23.723 "bdev_io_cache_size": 256, 00:15:23.723 "bdev_auto_examine": true, 00:15:23.724 "iobuf_small_cache_size": 128, 00:15:23.724 "iobuf_large_cache_size": 16 00:15:23.724 } 00:15:23.724 }, 00:15:23.724 { 00:15:23.724 "method": "bdev_raid_set_options", 00:15:23.724 "params": { 00:15:23.724 "process_window_size_kb": 1024 00:15:23.724 } 00:15:23.724 }, 00:15:23.724 { 00:15:23.724 "method": "bdev_iscsi_set_options", 00:15:23.724 "params": { 00:15:23.724 "timeout_sec": 30 00:15:23.724 } 00:15:23.724 }, 00:15:23.724 { 00:15:23.724 "method": "bdev_nvme_set_options", 00:15:23.724 "params": { 00:15:23.724 "action_on_timeout": "none", 00:15:23.724 "timeout_us": 0, 00:15:23.724 "timeout_admin_us": 0, 00:15:23.724 "keep_alive_timeout_ms": 10000, 00:15:23.724 "arbitration_burst": 0, 00:15:23.724 "low_priority_weight": 0, 00:15:23.724 "medium_priority_weight": 0, 00:15:23.724 "high_priority_weight": 0, 00:15:23.724 "nvme_adminq_poll_period_us": 10000, 00:15:23.724 "nvme_ioq_poll_period_us": 0, 00:15:23.724 "io_queue_requests": 0, 00:15:23.724 "delay_cmd_submit": true, 00:15:23.724 "transport_retry_count": 4, 00:15:23.724 "bdev_retry_count": 3, 00:15:23.724 "transport_ack_timeout": 0, 00:15:23.724 "ctrlr_loss_timeout_sec": 0, 00:15:23.724 "reconnect_delay_sec": 0, 00:15:23.724 "fast_io_fail_timeout_sec": 0, 00:15:23.724 "disable_auto_failback": false, 00:15:23.724 "generate_uuids": false, 00:15:23.724 
"transport_tos": 0, 00:15:23.724 "nvme_error_stat": false, 00:15:23.724 "rdma_srq_size": 0, 00:15:23.724 "io_path_stat": false, 00:15:23.724 "allow_accel_sequence": false, 00:15:23.724 "rdma_max_cq_size": 0, 00:15:23.724 "rdma_cm_event_timeout_ms": 0, 00:15:23.724 "dhchap_digests": [ 00:15:23.724 "sha256", 00:15:23.724 "sha384", 00:15:23.724 "sha512" 00:15:23.724 ], 00:15:23.724 "dhchap_dhgroups": [ 00:15:23.724 "null", 00:15:23.724 "ffdhe2048", 00:15:23.724 "ffdhe3072", 00:15:23.724 "ffdhe4096", 00:15:23.724 "ffdhe6144", 00:15:23.724 "ffdhe8192" 00:15:23.724 ] 00:15:23.724 } 00:15:23.724 }, 00:15:23.724 { 00:15:23.724 "method": "bdev_nvme_set_hotplug", 00:15:23.724 "params": { 00:15:23.724 "period_us": 100000, 00:15:23.724 "enable": false 00:15:23.724 } 00:15:23.724 }, 00:15:23.724 { 00:15:23.724 "method": "bdev_malloc_create", 00:15:23.724 "params": { 00:15:23.724 "name": "malloc0", 00:15:23.724 "num_blocks": 8192, 00:15:23.724 "block_size": 4096, 00:15:23.724 "physical_block_size": 4096, 00:15:23.724 "uuid": "00e08d78-84ca-4898-9447-20478ffe7b56", 00:15:23.724 "optimal_io_boundary": 0 00:15:23.724 } 00:15:23.724 }, 00:15:23.724 { 00:15:23.724 "method": "bdev_wait_for_examine" 00:15:23.724 } 00:15:23.724 ] 00:15:23.724 }, 00:15:23.724 { 00:15:23.724 "subsystem": "nbd", 00:15:23.724 "config": [] 00:15:23.724 }, 00:15:23.724 { 00:15:23.724 "subsystem": "scheduler", 00:15:23.724 "config": [ 00:15:23.724 { 00:15:23.724 "method": "framework_set_scheduler", 00:15:23.724 "params": { 00:15:23.724 "name": "static" 00:15:23.724 } 00:15:23.724 } 00:15:23.724 ] 00:15:23.724 }, 00:15:23.724 { 00:15:23.724 "subsystem": "nvmf", 00:15:23.724 "config": [ 00:15:23.724 { 00:15:23.724 "method": "nvmf_set_config", 00:15:23.724 "params": { 00:15:23.724 "discovery_filter": "match_any", 00:15:23.724 "admin_cmd_passthru": { 00:15:23.724 "identify_ctrlr": false 00:15:23.724 } 00:15:23.724 } 00:15:23.724 }, 00:15:23.724 { 00:15:23.724 "method": "nvmf_set_max_subsystems", 00:15:23.724 "params": { 00:15:23.724 "max_subsystems": 1024 00:15:23.724 } 00:15:23.724 }, 00:15:23.724 { 00:15:23.724 "method": "nvmf_set_crdt", 00:15:23.724 "params": { 00:15:23.724 "crdt1": 0, 00:15:23.724 "crdt2": 0, 00:15:23.724 "crdt3": 0 00:15:23.724 } 00:15:23.724 }, 00:15:23.724 { 00:15:23.724 "method": "nvmf_create_transport", 00:15:23.724 "params": { 00:15:23.724 "trtype": "TCP", 00:15:23.724 "max_queue_depth": 128, 00:15:23.724 "max_io_qpairs_per_ctrlr": 127, 00:15:23.724 "in_capsule_data_size": 4096, 00:15:23.724 "max_io_size": 131072, 00:15:23.724 "io_unit_size": 131072, 00:15:23.724 "max_aq_depth": 128, 00:15:23.724 "num_shared_buffers": 511, 00:15:23.724 "buf_cache_size": 4294967295, 00:15:23.724 "dif_insert_or_strip": false, 00:15:23.724 "zcopy": false, 00:15:23.724 "c2h_success": false, 00:15:23.724 "sock_priority": 0, 00:15:23.724 "abort_timeout_sec": 1, 00:15:23.724 "ack_timeout": 0, 00:15:23.724 "data_wr_pool_size": 0 00:15:23.724 } 00:15:23.724 }, 00:15:23.724 { 00:15:23.724 "method": "nvmf_create_subsystem", 00:15:23.724 "params": { 00:15:23.724 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.724 "allow_any_host": false, 00:15:23.724 "serial_number": "00000000000000000000", 00:15:23.724 "model_number": "SPDK bdev Controller", 00:15:23.724 "max_namespaces": 32, 00:15:23.724 "min_cntlid": 1, 00:15:23.724 "max_cntlid": 65519, 00:15:23.724 "ana_reporting": false 00:15:23.724 } 00:15:23.724 }, 00:15:23.724 { 00:15:23.724 "method": "nvmf_subsystem_add_host", 00:15:23.724 "params": { 00:15:23.724 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:15:23.724 "host": "nqn.2016-06.io.spdk:host1", 00:15:23.724 "psk": "key0" 00:15:23.724 } 00:15:23.724 }, 00:15:23.724 { 00:15:23.724 "method": "nvmf_subsystem_add_ns", 00:15:23.724 "params": { 00:15:23.724 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.724 "namespace": { 00:15:23.724 "nsid": 1, 00:15:23.724 "bdev_name": "malloc0", 00:15:23.724 "nguid": "00E08D7884CA4898944720478FFE7B56", 00:15:23.724 "uuid": "00e08d78-84ca-4898-9447-20478ffe7b56", 00:15:23.724 "no_auto_visible": false 00:15:23.724 } 00:15:23.724 } 00:15:23.724 }, 00:15:23.724 { 00:15:23.724 "method": "nvmf_subsystem_add_listener", 00:15:23.724 "params": { 00:15:23.724 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:23.724 "listen_address": { 00:15:23.724 "trtype": "TCP", 00:15:23.724 "adrfam": "IPv4", 00:15:23.724 "traddr": "10.0.0.2", 00:15:23.724 "trsvcid": "4420" 00:15:23.724 }, 00:15:23.724 "secure_channel": true 00:15:23.724 } 00:15:23.724 } 00:15:23.724 ] 00:15:23.724 } 00:15:23.724 ] 00:15:23.724 }' 00:15:23.724 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:23.724 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:23.724 10:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2805559 00:15:23.724 10:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:23.724 10:55:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2805559 00:15:23.724 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2805559 ']' 00:15:23.724 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.724 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:23.724 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.724 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:23.724 10:55:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:23.724 [2024-05-15 10:55:39.954773] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:15:23.724 [2024-05-15 10:55:39.954845] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.982 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.982 [2024-05-15 10:55:40.033128] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.982 [2024-05-15 10:55:40.159023] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.982 [2024-05-15 10:55:40.159087] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:23.982 [2024-05-15 10:55:40.159101] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:23.982 [2024-05-15 10:55:40.159113] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:23.982 [2024-05-15 10:55:40.159124] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:23.982 [2024-05-15 10:55:40.159217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.240 [2024-05-15 10:55:40.406152] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:24.240 [2024-05-15 10:55:40.438144] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:24.240 [2024-05-15 10:55:40.438223] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:24.240 [2024-05-15 10:55:40.452143] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:24.806 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:24.806 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:24.806 10:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:24.806 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:24.806 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:24.806 10:55:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:24.806 10:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=2805712 00:15:24.806 10:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 2805712 /var/tmp/bdevperf.sock 00:15:24.806 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@827 -- # '[' -z 2805712 ']' 00:15:24.806 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:24.806 10:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:24.806 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:24.806 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
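bdevperf is started with -z, so it idles on /var/tmp/bdevperf.sock until bdevs are configured over RPC; here that configuration arrives as JSON on /dev/fd/63. The equivalent interactive sequence, condensed from the commands run earlier in this log against the first bdevperf instance (pid 2805025), would be:

# Start bdevperf waiting for RPC configuration (-z): queue depth 128, 4k I/O, verify workload
build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
# Register the PSK and attach the TLS-secured controller through the bdevperf socket
scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.egpszoN2Nk
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
# Run the configured job and print the per-bdev latency table
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests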
00:15:24.806 10:55:40 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:15:24.806 "subsystems": [ 00:15:24.806 { 00:15:24.806 "subsystem": "keyring", 00:15:24.806 "config": [ 00:15:24.806 { 00:15:24.806 "method": "keyring_file_add_key", 00:15:24.806 "params": { 00:15:24.806 "name": "key0", 00:15:24.806 "path": "/tmp/tmp.egpszoN2Nk" 00:15:24.806 } 00:15:24.806 } 00:15:24.806 ] 00:15:24.806 }, 00:15:24.806 { 00:15:24.806 "subsystem": "iobuf", 00:15:24.806 "config": [ 00:15:24.806 { 00:15:24.806 "method": "iobuf_set_options", 00:15:24.806 "params": { 00:15:24.806 "small_pool_count": 8192, 00:15:24.806 "large_pool_count": 1024, 00:15:24.806 "small_bufsize": 8192, 00:15:24.806 "large_bufsize": 135168 00:15:24.806 } 00:15:24.806 } 00:15:24.806 ] 00:15:24.806 }, 00:15:24.806 { 00:15:24.806 "subsystem": "sock", 00:15:24.806 "config": [ 00:15:24.806 { 00:15:24.806 "method": "sock_set_default_impl", 00:15:24.806 "params": { 00:15:24.806 "impl_name": "posix" 00:15:24.807 } 00:15:24.807 }, 00:15:24.807 { 00:15:24.807 "method": "sock_impl_set_options", 00:15:24.807 "params": { 00:15:24.807 "impl_name": "ssl", 00:15:24.807 "recv_buf_size": 4096, 00:15:24.807 "send_buf_size": 4096, 00:15:24.807 "enable_recv_pipe": true, 00:15:24.807 "enable_quickack": false, 00:15:24.807 "enable_placement_id": 0, 00:15:24.807 "enable_zerocopy_send_server": true, 00:15:24.807 "enable_zerocopy_send_client": false, 00:15:24.807 "zerocopy_threshold": 0, 00:15:24.807 "tls_version": 0, 00:15:24.807 "enable_ktls": false 00:15:24.807 } 00:15:24.807 }, 00:15:24.807 { 00:15:24.807 "method": "sock_impl_set_options", 00:15:24.807 "params": { 00:15:24.807 "impl_name": "posix", 00:15:24.807 "recv_buf_size": 2097152, 00:15:24.807 "send_buf_size": 2097152, 00:15:24.807 "enable_recv_pipe": true, 00:15:24.807 "enable_quickack": false, 00:15:24.807 "enable_placement_id": 0, 00:15:24.807 "enable_zerocopy_send_server": true, 00:15:24.807 "enable_zerocopy_send_client": false, 00:15:24.807 "zerocopy_threshold": 0, 00:15:24.807 "tls_version": 0, 00:15:24.807 "enable_ktls": false 00:15:24.807 } 00:15:24.807 } 00:15:24.807 ] 00:15:24.807 }, 00:15:24.807 { 00:15:24.807 "subsystem": "vmd", 00:15:24.807 "config": [] 00:15:24.807 }, 00:15:24.807 { 00:15:24.807 "subsystem": "accel", 00:15:24.807 "config": [ 00:15:24.807 { 00:15:24.807 "method": "accel_set_options", 00:15:24.807 "params": { 00:15:24.807 "small_cache_size": 128, 00:15:24.807 "large_cache_size": 16, 00:15:24.807 "task_count": 2048, 00:15:24.807 "sequence_count": 2048, 00:15:24.807 "buf_count": 2048 00:15:24.807 } 00:15:24.807 } 00:15:24.807 ] 00:15:24.807 }, 00:15:24.807 { 00:15:24.807 "subsystem": "bdev", 00:15:24.807 "config": [ 00:15:24.807 { 00:15:24.807 "method": "bdev_set_options", 00:15:24.807 "params": { 00:15:24.807 "bdev_io_pool_size": 65535, 00:15:24.807 "bdev_io_cache_size": 256, 00:15:24.807 "bdev_auto_examine": true, 00:15:24.807 "iobuf_small_cache_size": 128, 00:15:24.807 "iobuf_large_cache_size": 16 00:15:24.807 } 00:15:24.807 }, 00:15:24.807 { 00:15:24.807 "method": "bdev_raid_set_options", 00:15:24.807 "params": { 00:15:24.807 "process_window_size_kb": 1024 00:15:24.807 } 00:15:24.807 }, 00:15:24.807 { 00:15:24.807 "method": "bdev_iscsi_set_options", 00:15:24.807 "params": { 00:15:24.807 "timeout_sec": 30 00:15:24.807 } 00:15:24.807 }, 00:15:24.807 { 00:15:24.807 "method": "bdev_nvme_set_options", 00:15:24.807 "params": { 00:15:24.807 "action_on_timeout": "none", 00:15:24.807 "timeout_us": 0, 00:15:24.807 "timeout_admin_us": 0, 00:15:24.807 "keep_alive_timeout_ms": 
10000, 00:15:24.807 "arbitration_burst": 0, 00:15:24.807 "low_priority_weight": 0, 00:15:24.807 "medium_priority_weight": 0, 00:15:24.807 "high_priority_weight": 0, 00:15:24.807 "nvme_adminq_poll_period_us": 10000, 00:15:24.807 "nvme_ioq_poll_period_us": 0, 00:15:24.807 "io_queue_requests": 512, 00:15:24.807 "delay_cmd_submit": true, 00:15:24.807 "transport_retry_count": 4, 00:15:24.807 "bdev_retry_count": 3, 00:15:24.807 "transport_ack_timeout": 0, 00:15:24.807 "ctrlr_loss_timeout_sec": 0, 00:15:24.807 "reconnect_delay_sec": 0, 00:15:24.807 "fast_io_fail_timeout_sec": 0, 00:15:24.807 "disable_auto_failback": false, 00:15:24.807 "generate_uuids": false, 00:15:24.807 "transport_tos": 0, 00:15:24.807 "nvme_error_stat": false, 00:15:24.807 "rdma_srq_size": 0, 00:15:24.807 "io_path_stat": false, 00:15:24.807 "allow_accel_sequence": false, 00:15:24.807 "rdma_max_cq_size": 0, 00:15:24.807 "rdma_cm_event_timeout_ms": 0, 00:15:24.807 "dhchap_digests": [ 00:15:24.807 "sha256", 00:15:24.807 "sha384", 00:15:24.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:24.807 "sha512" ], 00:15:24.807 "dhchap_dhgroups": [ 00:15:24.807 "null", 00:15:24.807 "ffdhe2048", 00:15:24.807 "ffdhe3072", 00:15:24.807 "ffdhe4096", 00:15:24.807 "ffdhe6144", 00:15:24.807 "ffdhe8192" 00:15:24.807 ] 00:15:24.807 } 00:15:24.807 }, 00:15:24.807 { 00:15:24.807 "method": "bdev_nvme_attach_controller", 00:15:24.807 "params": { 00:15:24.807 "name": "nvme0", 00:15:24.807 "trtype": "TCP", 00:15:24.807 "adrfam": "IPv4", 00:15:24.807 "traddr": "10.0.0.2", 00:15:24.807 "trsvcid": "4420", 00:15:24.807 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:24.807 "prchk_reftag": false, 00:15:24.807 "prchk_guard": false, 00:15:24.807 "ctrlr_loss_timeout_sec": 0, 00:15:24.807 "reconnect_delay_sec": 0, 00:15:24.807 "fast_io_fail_timeout_sec": 0, 00:15:24.807 "psk": "key0", 00:15:24.807 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:24.807 "hdgst": false, 00:15:24.807 "ddgst": false 00:15:24.807 } 00:15:24.807 }, 00:15:24.807 { 00:15:24.807 "method": "bdev_nvme_set_hotplug", 00:15:24.807 "params": { 00:15:24.807 "period_us": 100000, 00:15:24.807 "enable": false 00:15:24.807 } 00:15:24.807 }, 00:15:24.807 { 00:15:24.807 "method": "bdev_enable_histogram", 00:15:24.807 "params": { 00:15:24.807 "name": "nvme0n1", 00:15:24.807 "enable": true 00:15:24.807 } 00:15:24.807 }, 00:15:24.807 { 00:15:24.807 "method": "bdev_wait_for_examine" 00:15:24.807 } 00:15:24.807 ] 00:15:24.807 }, 00:15:24.807 { 00:15:24.807 "subsystem": "nbd", 00:15:24.807 "config": [] 00:15:24.807 } 00:15:24.807 ] 00:15:24.807 }' 00:15:24.807 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:24.807 10:55:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:24.807 [2024-05-15 10:55:41.032348] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization...
00:15:24.807 [2024-05-15 10:55:41.032441] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2805712 ] 00:15:25.067 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.067 [2024-05-15 10:55:41.104806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.067 [2024-05-15 10:55:41.220272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.325 [2024-05-15 10:55:41.407986] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:25.891 10:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:25.891 10:55:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@860 -- # return 0 00:15:25.891 10:55:41 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:25.891 10:55:41 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:15:26.181 10:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.181 10:55:42 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:26.181 Running I/O for 1 seconds... 00:15:27.561 00:15:27.561 Latency(us) 00:15:27.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.561 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:27.561 Verification LBA range: start 0x0 length 0x2000 00:15:27.561 nvme0n1 : 1.08 941.97 3.68 0.00 0.00 131903.16 11893.57 177092.84 00:15:27.561 =================================================================================================================== 00:15:27.561 Total : 941.97 3.68 0.00 0.00 131903.16 11893.57 177092.84 00:15:27.561 0 00:15:27.561 10:55:43 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:15:27.561 10:55:43 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:15:27.561 10:55:43 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:27.561 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@804 -- # type=--id 00:15:27.561 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # id=0 00:15:27.562 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # '[' --id = --pid ']' 00:15:27.562 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:27.562 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0 00:15:27.562 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]] 00:15:27.562 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@816 -- # for n in $shm_files 00:15:27.562 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:27.562 nvmf_trace.0 00:15:27.562 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # return 0 00:15:27.562 10:55:43 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2805712 00:15:27.562 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2805712 ']' 00:15:27.562 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2805712 
00:15:27.562 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:27.562 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:27.562 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2805712 00:15:27.562 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:27.562 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:27.562 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2805712' 00:15:27.562 killing process with pid 2805712 00:15:27.562 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2805712 00:15:27.562 Received shutdown signal, test time was about 1.000000 seconds 00:15:27.562 00:15:27.562 Latency(us) 00:15:27.562 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.562 =================================================================================================================== 00:15:27.562 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:27.562 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- # wait 2805712 00:15:27.820 10:55:43 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:27.820 10:55:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:27.820 10:55:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:15:27.820 10:55:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:27.820 10:55:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:15:27.820 10:55:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:27.820 10:55:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:27.820 rmmod nvme_tcp 00:15:27.820 rmmod nvme_fabrics 00:15:27.820 rmmod nvme_keyring 00:15:27.820 10:55:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:27.820 10:55:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:15:27.820 10:55:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:15:27.820 10:55:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2805559 ']' 00:15:27.820 10:55:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2805559 00:15:27.820 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@946 -- # '[' -z 2805559 ']' 00:15:27.820 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@950 -- # kill -0 2805559 00:15:27.820 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # uname 00:15:27.820 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:27.820 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2805559 00:15:27.820 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:15:27.820 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:15:27.820 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2805559' 00:15:27.820 killing process with pid 2805559 00:15:27.820 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # kill 2805559 00:15:27.820 [2024-05-15 10:55:43.887447] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:27.820 10:55:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@970 -- 
# wait 2805559 00:15:28.080 10:55:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:28.080 10:55:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:28.080 10:55:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:28.080 10:55:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:28.080 10:55:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:28.080 10:55:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.080 10:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:28.080 10:55:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:29.985 10:55:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:29.985 10:55:46 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.sNwsB23NDe /tmp/tmp.EqoIvoBvjV /tmp/tmp.egpszoN2Nk 00:15:29.985 00:15:29.985 real 1m23.273s 00:15:29.985 user 2m9.363s 00:15:29.985 sys 0m29.482s 00:15:29.985 10:55:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:29.985 10:55:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:29.985 ************************************ 00:15:29.985 END TEST nvmf_tls 00:15:29.985 ************************************ 00:15:30.245 10:55:46 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:30.245 10:55:46 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:30.245 10:55:46 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:30.245 10:55:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:30.245 ************************************ 00:15:30.245 START TEST nvmf_fips 00:15:30.245 ************************************ 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:30.245 * Looking for test storage... 
00:15:30.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.245 10:55:46 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:15:30.245 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:15:30.246 Error setting digest 00:15:30.246 00826F8DA77F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:15:30.246 00826F8DA77F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:30.246 10:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:30.504 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:30.504 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:30.504 10:55:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:15:30.504 10:55:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:33.039 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:33.039 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:15:33.039 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:33.039 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:33.039 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:33.039 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:33.039 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:33.039 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:15:33.039 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:33.039 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:33.040 
10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:33.040 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:33.040 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:33.040 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:33.040 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:33.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:33.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:15:33.040 00:15:33.040 --- 10.0.0.2 ping statistics --- 00:15:33.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.040 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:33.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:33.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:15:33.040 00:15:33.040 --- 10.0.0.1 ping statistics --- 00:15:33.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:33.040 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@720 -- # xtrace_disable 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2808368 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2808368 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@827 -- # '[' -z 2808368 ']' 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:33.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:33.040 10:55:49 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:33.040 [2024-05-15 10:55:49.252406] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:15:33.040 [2024-05-15 10:55:49.252518] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:33.299 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.299 [2024-05-15 10:55:49.333716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.299 [2024-05-15 10:55:49.448410] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:33.299 [2024-05-15 10:55:49.448481] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:33.299 [2024-05-15 10:55:49.448499] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:33.300 [2024-05-15 10:55:49.448512] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:33.300 [2024-05-15 10:55:49.448524] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:33.300 [2024-05-15 10:55:49.448557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.235 10:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:34.235 10:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:15:34.235 10:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:34.235 10:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:34.235 10:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:34.235 10:55:50 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:34.236 10:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:15:34.236 10:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:34.236 10:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:34.236 10:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:34.236 10:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:34.236 10:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:34.236 10:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:34.236 10:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:34.494 [2024-05-15 10:55:50.491370] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:34.494 [2024-05-15 10:55:50.507287] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:34.494 [2024-05-15 10:55:50.507353] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:34.494 [2024-05-15 10:55:50.507580] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:34.494 [2024-05-15 10:55:50.539507] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:15:34.494 malloc0 00:15:34.494 10:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:34.494 10:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2808523 00:15:34.494 10:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:34.494 10:55:50 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2808523 /var/tmp/bdevperf.sock 00:15:34.494 10:55:50 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@827 -- # '[' -z 2808523 ']' 00:15:34.494 10:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:34.494 10:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local max_retries=100 00:15:34.494 10:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:34.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:34.494 10:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # xtrace_disable 00:15:34.494 10:55:50 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:34.494 [2024-05-15 10:55:50.630921] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:15:34.494 [2024-05-15 10:55:50.631029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2808523 ] 00:15:34.494 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.494 [2024-05-15 10:55:50.701928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:34.753 [2024-05-15 10:55:50.810072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:35.687 10:55:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:15:35.687 10:55:51 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@860 -- # return 0 00:15:35.687 10:55:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:35.687 [2024-05-15 10:55:51.832748] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:35.687 [2024-05-15 10:55:51.832881] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:15:35.687 TLSTESTn1 00:15:35.945 10:55:51 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:35.945 Running I/O for 10 seconds... 
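The records above complete the TLS data-path check: bdevperf is launched with -z (wait for RPC) on its own socket, a TLS-PSK NVMe/TCP controller is attached through rpc.py, and perform_tests drives the verify workload whose latency table follows. A minimal standalone sketch of that sequence, assuming a local SPDK checkout at $SPDK_DIR; all flags are copied from the trace itself, and the polling loop stands in for the harness's waitforlisten:

# Minimal sketch of the traced bdevperf TLS flow. SPDK_DIR is a placeholder.
SPDK_DIR=/path/to/spdk
sock=/var/tmp/bdevperf.sock
"$SPDK_DIR/build/examples/bdevperf" -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
bdevperf_pid=$!
# poll the RPC socket until bdevperf answers (simplified waitforlisten)
until "$SPDK_DIR/scripts/rpc.py" -s "$sock" rpc_get_methods &>/dev/null; do sleep 0.1; done
# attach the TLS-enabled NVMe/TCP controller using the PSK written earlier
"$SPDK_DIR/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk "$SPDK_DIR/test/nvmf/fips/key.txt"
# run the queued verify workload; bdevperf prints the latency table below
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$sock" perform_tests
wait "$bdevperf_pid"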
00:15:45.917
00:15:45.917 Latency(us)
00:15:45.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:45.917 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:15:45.917 Verification LBA range: start 0x0 length 0x2000
00:15:45.917 TLSTESTn1 : 10.08 1267.83 4.95 0.00 0.00 100611.63 6213.78 160004.93
00:15:45.917 ===================================================================================================================
00:15:45.917 Total : 1267.83 4.95 0.00 0.00 100611.63 6213.78 160004.93
00:15:45.917 0
00:15:45.917 10:56:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:15:45.917 10:56:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:15:45.917 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@804 -- # type=--id
00:15:45.917 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # id=0
00:15:45.917 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # '[' --id = --pid ']'
00:15:45.917 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:15:46.182 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@810 -- # shm_files=nvmf_trace.0
00:15:46.182 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # [[ -z nvmf_trace.0 ]]
00:15:46.182 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@816 -- # for n in $shm_files
00:15:46.182 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:15:46.182 nvmf_trace.0
00:15:46.182 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # return 0
00:15:46.182 10:56:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2808523
00:15:46.182 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 2808523 ']'
00:15:46.182 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 2808523
00:15:46.182 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname
00:15:46.182 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:15:46.182 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2808523
00:15:46.182 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_2
00:15:46.182 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']'
00:15:46.182 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2808523'
00:15:46.182 killing process with pid 2808523
00:15:46.182 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 2808523
00:15:46.182 Received shutdown signal, test time was about 10.000000 seconds
00:15:46.182
00:15:46.182 Latency(us)
00:15:46.182 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:46.182 ===================================================================================================================
00:15:46.182 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:15:46.182 [2024-05-15 10:56:02.244982] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:15:46.182 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 2808523
00:15:46.440 10:56:02 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:15:46.440 10:56:02 nvmf_tcp.nvmf_fips
-- nvmf/common.sh@488 -- # nvmfcleanup 00:15:46.440 10:56:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:15:46.440 10:56:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:46.440 10:56:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:15:46.440 10:56:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:46.440 10:56:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:46.440 rmmod nvme_tcp 00:15:46.440 rmmod nvme_fabrics 00:15:46.440 rmmod nvme_keyring 00:15:46.440 10:56:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:46.440 10:56:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:15:46.440 10:56:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:15:46.440 10:56:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2808368 ']' 00:15:46.440 10:56:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2808368 00:15:46.440 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@946 -- # '[' -z 2808368 ']' 00:15:46.440 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@950 -- # kill -0 2808368 00:15:46.440 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # uname 00:15:46.441 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:15:46.441 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2808368 00:15:46.441 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:15:46.441 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:15:46.441 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2808368' 00:15:46.441 killing process with pid 2808368 00:15:46.441 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # kill 2808368 00:15:46.441 [2024-05-15 10:56:02.611061] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:46.441 [2024-05-15 10:56:02.611119] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:15:46.441 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@970 -- # wait 2808368 00:15:46.699 10:56:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:46.699 10:56:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:46.699 10:56:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:46.699 10:56:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:46.699 10:56:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:46.699 10:56:02 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:46.699 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:46.699 10:56:02 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:49.232 10:56:04 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:49.232 10:56:04 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:15:49.232 00:15:49.232 real 0m18.698s 00:15:49.232 user 0m23.459s 00:15:49.232 sys 0m6.917s 00:15:49.232 10:56:04 
nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # xtrace_disable 00:15:49.232 10:56:04 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:49.232 ************************************ 00:15:49.232 END TEST nvmf_fips 00:15:49.232 ************************************ 00:15:49.232 10:56:04 nvmf_tcp -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:15:49.232 10:56:04 nvmf_tcp -- nvmf/nvmf.sh@70 -- # [[ phy == phy ]] 00:15:49.232 10:56:04 nvmf_tcp -- nvmf/nvmf.sh@71 -- # '[' tcp = tcp ']' 00:15:49.232 10:56:04 nvmf_tcp -- nvmf/nvmf.sh@72 -- # gather_supported_nvmf_pci_devs 00:15:49.232 10:56:04 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:15:49.232 10:56:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:51.822 10:56:07 
nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:51.822 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:51.822 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:51.822 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:51.822 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/nvmf.sh@73 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/nvmf.sh@74 -- # (( 2 > 0 )) 00:15:51.822 10:56:07 nvmf_tcp -- nvmf/nvmf.sh@75 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 
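Before dispatching nvmf_perf_adq, nvmf.sh re-runs the same gather_supported_nvmf_pci_devs walk seen earlier: enumerate /sys/bus/pci/devices, keep the functions whose vendor/device IDs appear in the supported tables, and record the netdev names exposed under each function. A hedged standalone sketch of that walk, reduced to the two E810 IDs (0x8086, 0x1592/0x159b) that match on this rig:

# Sketch of the sysfs walk behind gather_supported_nvmf_pci_devs.
intel=0x8086
net_devs=()
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor") device=$(<"$pci/device")
    [[ $vendor == "$intel" ]] || continue
    [[ $device == 0x1592 || $device == 0x159b ]] || continue
    # each matching function lists its netdev name(s) under .../net/
    for net in "$pci"/net/*; do
        [[ -e $net ]] && net_devs+=("${net##*/}")
    done
done
(( ${#net_devs[@]} )) && printf 'Found net device %s\n' "${net_devs[@]}"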
00:15:51.822 10:56:07 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:15:51.822 10:56:07 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:15:51.822 10:56:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:51.822 ************************************ 00:15:51.822 START TEST nvmf_perf_adq 00:15:51.823 ************************************ 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:15:51.823 * Looking for test storage... 00:15:51.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:15:51.823 10:56:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:53.727 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
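The unknown/unbound guards in this walk imply that the helper resolves which kernel driver owns each matched function before using it. A sketch of one way to do that through the sysfs driver symlink; the driver names in the case arms are illustrative, not the script's exact list:

# Hedged sketch: resolve the kernel driver bound to a PCI function.
pci=0000:0a:00.0   # example address taken from the trace
if [[ -e /sys/bus/pci/devices/$pci/driver ]]; then
    driver=$(basename "$(readlink -f "/sys/bus/pci/devices/$pci/driver")")
else
    driver=unbound   # no driver currently bound to this function
fi
case $driver in
    ice)     echo "E810 port handled by ice ($pci)" ;;
    unbound) echo "skip $pci: no driver bound" ;;
    *)       echo "skip $pci: unexpected driver $driver" ;;
esac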
00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:53.727 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:53.727 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:53.727 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 
-- # (( 2 == 0 )) 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:15:53.727 10:56:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:15:54.662 10:56:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:15:56.032 10:56:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:01.303 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:01.303 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:01.303 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:01.303 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:01.303 10:56:16 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:01.303 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:01.303 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:16:01.303 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:01.303 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:01.303 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:01.303 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:01.303 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:01.303 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:01.303 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:01.303 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:01.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:01.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:16:01.304 00:16:01.304 --- 10.0.0.2 ping statistics --- 00:16:01.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.304 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:01.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:01.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:16:01.304 00:16:01.304 --- 10.0.0.1 ping statistics --- 00:16:01.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.304 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2815095 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2815095 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 2815095 ']' 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
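nvmf_tcp_init above wires the two E810 ports back-to-back: cvl_0_0 (the target side) is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, cvl_0_1 (the initiator side) stays in the root namespace with 10.0.0.1/24, TCP/4420 is opened in iptables, and one ping in each direction confirms the path. Condensed from the commands in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace -> root ns
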
00:16:01.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:01.304 10:56:17 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:01.304 [2024-05-15 10:56:17.214461] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:16:01.304 [2024-05-15 10:56:17.214545] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.304 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.304 [2024-05-15 10:56:17.301848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:01.304 [2024-05-15 10:56:17.420917] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:01.304 [2024-05-15 10:56:17.420979] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:01.304 [2024-05-15 10:56:17.421003] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:01.304 [2024-05-15 10:56:17.421016] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:01.304 [2024-05-15 10:56:17.421029] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:01.304 [2024-05-15 10:56:17.421337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:01.304 [2024-05-15 10:56:17.421394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:01.304 [2024-05-15 10:56:17.421511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:01.304 [2024-05-15 10:56:17.421629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- 
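With nvmf_tgt started under --wait-for-rpc inside the namespace, adq_configure_nvmf_target drives the whole setup over the RPC socket before any I/O starts. The calls that follow in the trace, collected in one place; rpc_cmd is the harness helper, shown here as a direct rpc.py invocation, which is an assumption about what the wrapper forwards to:

    rpc=./scripts/rpc.py    # assumed location inside the SPDK tree
    $rpc sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Note that this first, baseline run deliberately passes --enable-placement-id 0 and --sock-priority 0, i.e. ADQ-style connection placement disabled; the second run later in the log flips both to 1.
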
common/autotest_common.sh@10 -- # set +x 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:02.239 [2024-05-15 10:56:18.334949] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:02.239 Malloc1 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:02.239 [2024-05-15 10:56:18.386142] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:02.239 [2024-05-15 10:56:18.386437] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2815264 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:16:02.239 10:56:18 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:02.239 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.771 10:56:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:16:04.771 10:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.771 10:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:04.771 10:56:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.771 10:56:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:16:04.771 "tick_rate": 2700000000, 00:16:04.771 "poll_groups": [ 00:16:04.771 { 00:16:04.771 "name": "nvmf_tgt_poll_group_000", 00:16:04.771 "admin_qpairs": 1, 00:16:04.771 "io_qpairs": 1, 00:16:04.771 "current_admin_qpairs": 1, 00:16:04.771 "current_io_qpairs": 1, 00:16:04.771 "pending_bdev_io": 0, 00:16:04.771 "completed_nvme_io": 20673, 00:16:04.771 "transports": [ 00:16:04.771 { 00:16:04.771 "trtype": "TCP" 00:16:04.771 } 00:16:04.771 ] 00:16:04.771 }, 00:16:04.771 { 00:16:04.771 "name": "nvmf_tgt_poll_group_001", 00:16:04.771 "admin_qpairs": 0, 00:16:04.771 "io_qpairs": 1, 00:16:04.771 "current_admin_qpairs": 0, 00:16:04.771 "current_io_qpairs": 1, 00:16:04.771 "pending_bdev_io": 0, 00:16:04.771 "completed_nvme_io": 20888, 00:16:04.771 "transports": [ 00:16:04.771 { 00:16:04.771 "trtype": "TCP" 00:16:04.771 } 00:16:04.771 ] 00:16:04.771 }, 00:16:04.771 { 00:16:04.771 "name": "nvmf_tgt_poll_group_002", 00:16:04.771 "admin_qpairs": 0, 00:16:04.771 "io_qpairs": 1, 00:16:04.771 "current_admin_qpairs": 0, 00:16:04.771 "current_io_qpairs": 1, 00:16:04.771 "pending_bdev_io": 0, 00:16:04.771 "completed_nvme_io": 14551, 00:16:04.771 "transports": [ 00:16:04.771 { 00:16:04.771 "trtype": "TCP" 00:16:04.771 } 00:16:04.771 ] 00:16:04.771 }, 00:16:04.771 { 00:16:04.771 "name": "nvmf_tgt_poll_group_003", 00:16:04.771 "admin_qpairs": 0, 00:16:04.771 "io_qpairs": 1, 00:16:04.771 "current_admin_qpairs": 0, 00:16:04.771 "current_io_qpairs": 1, 00:16:04.771 "pending_bdev_io": 0, 00:16:04.771 "completed_nvme_io": 20694, 00:16:04.771 "transports": [ 00:16:04.771 { 00:16:04.771 "trtype": "TCP" 00:16:04.771 } 00:16:04.771 ] 00:16:04.771 } 00:16:04.771 ] 00:16:04.771 }' 00:16:04.771 10:56:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:16:04.771 10:56:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:16:04.771 10:56:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:16:04.771 10:56:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:16:04.771 10:56:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2815264 00:16:12.883 Initializing NVMe Controllers 00:16:12.883 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:12.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:16:12.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:16:12.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:16:12.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:16:12.883 Initialization complete. Launching workers. 
00:16:12.883 ======================================================== 00:16:12.883 Latency(us) 00:16:12.883 Device Information : IOPS MiB/s Average min max 00:16:12.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10651.60 41.61 6008.99 1804.14 10114.68 00:16:12.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10797.90 42.18 5927.34 1700.92 8940.31 00:16:12.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7539.60 29.45 8491.80 2983.78 14307.07 00:16:12.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10656.60 41.63 6005.97 2038.33 9250.74 00:16:12.883 ======================================================== 00:16:12.883 Total : 39645.70 154.87 6458.11 1700.92 14307.07 00:16:12.883 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:12.883 rmmod nvme_tcp 00:16:12.883 rmmod nvme_fabrics 00:16:12.883 rmmod nvme_keyring 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2815095 ']' 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2815095 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 2815095 ']' 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 2815095 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2815095 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2815095' 00:16:12.883 killing process with pid 2815095 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 2815095 00:16:12.883 [2024-05-15 10:56:28.570119] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 2815095 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:12.883 10:56:28 
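nvmftestfini above tears the fixture down in roughly the reverse order it was built: unload the kernel NVMe initiator modules, kill the target by pid, remove the namespace, and flush the leftover address. A sketch per the trace; the namespace deletion line is an assumption about what _remove_spdk_ns amounts to, since its output is redirected away:

    nvmfpid=2815095                    # pid captured at nvmfappstart
    modprobe -v -r nvme-tcp            # rmmod messages above come from these
    modprobe -v -r nvme-fabrics        # nvme_keyring is pulled out with them
    kill "$nvmfpid"
    wait "$nvmfpid"
    ip netns delete cvl_0_0_ns_spdk    # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1
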
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.883 10:56:28 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.787 10:56:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:14.787 10:56:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:16:14.787 10:56:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:16:15.353 10:56:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:16:17.282 10:56:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:16:22.562 10:56:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:16:22.562 10:56:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:22.562 10:56:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.562 10:56:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:22.562 10:56:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:22.562 10:56:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:22.562 10:56:37 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.562 10:56:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.562 10:56:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:16:22.562 
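Between the baseline run and the ADQ run, adq_reload_driver gives the NIC a clean slate: the ice driver is removed and re-probed, and the harness sleeps while the ports come back. From the trace:

    rmmod ice
    modprobe ice
    sleep 5        # let link state and netdev names settle before re-scanning PCI
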
10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:22.562 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:22.562 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == 
rdma ]] 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:22.562 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:22.562 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush 
cvl_0_1 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:22.562 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:22.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:22.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:16:22.563 00:16:22.563 --- 10.0.0.2 ping statistics --- 00:16:22.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.563 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:22.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:22.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:16:22.563 00:16:22.563 --- 10.0.0.1 ping statistics --- 00:16:22.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:22.563 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:16:22.563 net.core.busy_poll = 1 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:16:22.563 net.core.busy_read = 1 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec 
cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2817756 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2817756 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@827 -- # '[' -z 2817756 ']' 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:22.563 10:56:38 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:22.563 [2024-05-15 10:56:38.384982] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:16:22.563 [2024-05-15 10:56:38.385072] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:22.563 EAL: No free 2048 kB hugepages reported on node 1 00:16:22.563 [2024-05-15 10:56:38.465852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:22.563 [2024-05-15 10:56:38.585427] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:22.563 [2024-05-15 10:56:38.585494] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:22.563 [2024-05-15 10:56:38.585531] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:22.563 [2024-05-15 10:56:38.585542] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:22.563 [2024-05-15 10:56:38.585552] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
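adq_configure_driver is where ADQ actually gets switched on, all inside the target's namespace: hardware TC offload and kernel busy polling are enabled, an mqprio root qdisc splits the port into two traffic classes of two queues each, and a flower filter steers NVMe/TCP traffic for 10.0.0.2:4420 into TC 1 in hardware (skip_sw). The commands as they appear in the trace:

    ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
    ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio \
        num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
    ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: \
        prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The trace then runs scripts/perf/nvmf/set_xps_rxqs on cvl_0_0 to align transmit queue selection with the receive queues ADQ will use.
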
00:16:22.563 [2024-05-15 10:56:38.585652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:22.563 [2024-05-15 10:56:38.585716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:22.563 [2024-05-15 10:56:38.585775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:22.563 [2024-05-15 10:56:38.585779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@860 -- # return 0 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:23.498 [2024-05-15 10:56:39.563945] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:23.498 Malloc1 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.498 10:56:39 
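The second target configuration that follows differs from the baseline in exactly two arguments: placement id and socket priority are now 1, so incoming connections are placed on poll groups according to the hardware queue they arrive on rather than round-robin. From the trace:

    rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
    rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
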
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:23.498 [2024-05-15 10:56:39.617652] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:23.498 [2024-05-15 10:56:39.618014] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:23.498 10:56:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.499 10:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2817921 00:16:23.499 10:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:16:23.499 10:56:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:16:23.499 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.399 10:56:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:16:25.399 10:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.400 10:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:25.656 10:56:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.657 10:56:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:16:25.657 "tick_rate": 2700000000, 00:16:25.657 "poll_groups": [ 00:16:25.657 { 00:16:25.657 "name": "nvmf_tgt_poll_group_000", 00:16:25.657 "admin_qpairs": 1, 00:16:25.657 "io_qpairs": 2, 00:16:25.657 "current_admin_qpairs": 1, 00:16:25.657 "current_io_qpairs": 2, 00:16:25.657 "pending_bdev_io": 0, 00:16:25.657 "completed_nvme_io": 25156, 00:16:25.657 "transports": [ 00:16:25.657 { 00:16:25.657 "trtype": "TCP" 00:16:25.657 } 00:16:25.657 ] 00:16:25.657 }, 00:16:25.657 { 00:16:25.657 "name": "nvmf_tgt_poll_group_001", 00:16:25.657 "admin_qpairs": 0, 00:16:25.657 "io_qpairs": 2, 00:16:25.657 "current_admin_qpairs": 0, 00:16:25.657 "current_io_qpairs": 2, 00:16:25.657 "pending_bdev_io": 0, 00:16:25.657 "completed_nvme_io": 26044, 00:16:25.657 "transports": [ 00:16:25.657 { 00:16:25.657 "trtype": "TCP" 00:16:25.657 } 00:16:25.657 ] 00:16:25.657 }, 00:16:25.657 { 00:16:25.657 "name": 
"nvmf_tgt_poll_group_002", 00:16:25.657 "admin_qpairs": 0, 00:16:25.657 "io_qpairs": 0, 00:16:25.657 "current_admin_qpairs": 0, 00:16:25.657 "current_io_qpairs": 0, 00:16:25.657 "pending_bdev_io": 0, 00:16:25.657 "completed_nvme_io": 0, 00:16:25.657 "transports": [ 00:16:25.657 { 00:16:25.657 "trtype": "TCP" 00:16:25.657 } 00:16:25.657 ] 00:16:25.657 }, 00:16:25.657 { 00:16:25.657 "name": "nvmf_tgt_poll_group_003", 00:16:25.657 "admin_qpairs": 0, 00:16:25.657 "io_qpairs": 0, 00:16:25.657 "current_admin_qpairs": 0, 00:16:25.657 "current_io_qpairs": 0, 00:16:25.657 "pending_bdev_io": 0, 00:16:25.657 "completed_nvme_io": 0, 00:16:25.657 "transports": [ 00:16:25.657 { 00:16:25.657 "trtype": "TCP" 00:16:25.657 } 00:16:25.657 ] 00:16:25.657 } 00:16:25.657 ] 00:16:25.657 }' 00:16:25.657 10:56:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:16:25.657 10:56:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:16:25.657 10:56:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:16:25.657 10:56:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:16:25.657 10:56:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2817921 00:16:33.765 Initializing NVMe Controllers 00:16:33.765 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:33.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:16:33.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:16:33.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:16:33.765 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:16:33.765 Initialization complete. Launching workers. 
00:16:33.765 ======================================================== 00:16:33.765 Latency(us) 00:16:33.765 Device Information : IOPS MiB/s Average min max 00:16:33.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6620.20 25.86 9672.69 1921.30 55033.26 00:16:33.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6295.00 24.59 10200.15 1760.80 55229.30 00:16:33.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6709.40 26.21 9543.24 1719.12 55922.70 00:16:33.765 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7100.30 27.74 9015.81 1678.46 53415.47 00:16:33.765 ======================================================== 00:16:33.765 Total : 26724.90 104.39 9589.91 1678.46 55922.70 00:16:33.765 00:16:33.765 10:56:49 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:16:33.765 10:56:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:33.765 10:56:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:16:33.765 10:56:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:33.765 10:56:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:16:33.765 10:56:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:33.765 10:56:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:33.765 rmmod nvme_tcp 00:16:33.765 rmmod nvme_fabrics 00:16:33.765 rmmod nvme_keyring 00:16:33.765 10:56:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:33.765 10:56:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:16:33.765 10:56:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:16:33.765 10:56:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2817756 ']' 00:16:33.765 10:56:49 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2817756 00:16:33.765 10:56:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@946 -- # '[' -z 2817756 ']' 00:16:33.765 10:56:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@950 -- # kill -0 2817756 00:16:33.765 10:56:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # uname 00:16:33.765 10:56:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:33.765 10:56:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2817756 00:16:33.765 10:56:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:33.765 10:56:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:33.765 10:56:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2817756' 00:16:33.765 killing process with pid 2817756 00:16:33.765 10:56:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@965 -- # kill 2817756 00:16:33.765 [2024-05-15 10:56:49.880815] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:33.765 10:56:49 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@970 -- # wait 2817756 00:16:34.023 10:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:34.023 10:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:34.023 10:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:34.023 10:56:50 
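The pass/fail check above inverts the baseline one: with ADQ steering all connections onto TC 1's two queues, at least two of the four poll groups must sit idle. nvmf_get_stats is filtered with jq the same way as before; a condensed sketch using the harness's rpc_cmd helper:

    idle=$(rpc_cmd nvmf_get_stats \
           | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
           | wc -l)
    # perf_adq.sh@101 evaluates [[ $idle -lt 2 ]] and fails the test if true
    [[ $idle -lt 2 ]] && { echo "expected at least 2 idle poll groups"; exit 1; }
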
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:34.023 10:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:34.023 10:56:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.023 10:56:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:34.023 10:56:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.311 10:56:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:37.311 10:56:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:37.311 00:16:37.311 real 0m45.706s 00:16:37.311 user 2m34.736s 00:16:37.311 sys 0m13.892s 00:16:37.311 10:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:37.311 10:56:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:16:37.311 ************************************ 00:16:37.311 END TEST nvmf_perf_adq 00:16:37.311 ************************************ 00:16:37.311 10:56:53 nvmf_tcp -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:16:37.311 10:56:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:16:37.311 10:56:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:37.311 10:56:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:37.311 ************************************ 00:16:37.311 START TEST nvmf_shutdown 00:16:37.311 ************************************ 00:16:37.311 10:56:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:16:37.311 * Looking for test storage... 
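Each suite is launched through the harness's run_test wrapper, which prints the START TEST / END TEST banners seen above and times the suite (the "real 0m45.706s" style summary). A hypothetical simplification; the real helper in autotest_common.sh also manages xtrace state and argument checks:

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        local rc=$?                    # exit status of the timed suite
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    run_test nvmf_shutdown test/nvmf/target/shutdown.sh --transport=tcp
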
00:16:37.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:37.311 10:56:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:37.311 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:16:37.311 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:37.311 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:37.311 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:37.311 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:37.311 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:37.311 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:37.311 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:37.311 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:37.311 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:16:37.312 ************************************ 00:16:37.312 START TEST nvmf_shutdown_tc1 00:16:37.312 ************************************ 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc1 00:16:37.312 10:56:53 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:37.312 10:56:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:39.844 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:39.844 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:39.844 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:39.844 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:39.844 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:39.844 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:39.844 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:39.844 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:16:39.844 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:39.844 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:16:39.844 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:16:39.844 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:16:39.844 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:16:39.844 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:16:39.844 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:39.844 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:39.844 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:39.844 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:39.844 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:39.844 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:39.844 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:39.844 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:39.845 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:39.845 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:39.845 10:56:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:39.845 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:39.845 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
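The device discovery traced above reduces to a short sysfs walk. As a hedged sketch (the PCI addresses and the 0x8086:0x159b e810 device IDs are the ones this node logged; nvmf/common.sh's real loop handles more vendors and states):

for pci in 0000:0a:00.0 0000:0a:00.1; do                 # the two e810 ports found above
    for netdev in /sys/bus/pci/devices/"$pci"/net/*; do  # each PCI port lists its netdevs here
        [ -e "$netdev" ] || continue                     # skip ports with no bound net driver
        echo "Found net devices under $pci: ${netdev##*/}"
    done
done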
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:39.845 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:39.846 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:39.846 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:16:39.846 00:16:39.846 --- 10.0.0.2 ping statistics --- 00:16:39.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.846 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:39.846 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:39.846 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:16:39.846 00:16:39.846 --- 10.0.0.1 ping statistics --- 00:16:39.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:39.846 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2821620 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2821620 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 2821620 ']' 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:39.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:39.846 10:56:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:39.846 [2024-05-15 10:56:56.020896] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
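Condensed from the xtrace above, the TCP test-bed wiring amounts to the following; the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addresses, and port 4420 are exactly as logged, but treat this as a sketch of nvmftestinit, not its full code:

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start from clean interfaces
ip netns add cvl_0_0_ns_spdk                           # the target gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                     # reachability check, both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# nvmf_tgt is then launched inside the namespace, per the trace:
#   ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E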
00:16:39.846 [2024-05-15 10:56:56.020995] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:39.846 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.105 [2024-05-15 10:56:56.097446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:40.105 [2024-05-15 10:56:56.209003] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.105 [2024-05-15 10:56:56.209061] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:40.105 [2024-05-15 10:56:56.209075] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.105 [2024-05-15 10:56:56.209086] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.105 [2024-05-15 10:56:56.209096] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:40.105 [2024-05-15 10:56:56.209182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:40.105 [2024-05-15 10:56:56.209213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:40.105 [2024-05-15 10:56:56.209275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:40.105 [2024-05-15 10:56:56.209277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:41.038 10:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:41.038 10:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:16:41.038 10:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:41.038 10:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:41.038 10:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:41.038 10:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:41.038 10:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:41.038 10:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.038 10:56:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:41.038 [2024-05-15 10:56:56.999643] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:41.038 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.038 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:16:41.038 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:16:41.038 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:41.038 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:41.038 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:41.038 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:41.038 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:41.038 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:41.038 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:41.038 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:41.038 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:41.038 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:41.038 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:41.038 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:41.038 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:41.038 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:41.038 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:41.038 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:41.039 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:41.039 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:41.039 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:41.039 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:41.039 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:41.039 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:41.039 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:16:41.039 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:16:41.039 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.039 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:41.039 Malloc1 00:16:41.039 [2024-05-15 10:56:57.088376] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:41.039 [2024-05-15 10:56:57.088663] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:41.039 Malloc2 00:16:41.039 Malloc3 00:16:41.039 Malloc4 00:16:41.039 Malloc5 00:16:41.298 Malloc6 00:16:41.298 Malloc7 00:16:41.298 Malloc8 00:16:41.298 Malloc9 00:16:41.298 Malloc10 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:41.558 10:56:57 
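The ten Malloc bdevs and subsystems above are created from the batched rpcs.txt file; interactively, one subsystem's worth of that setup would look roughly like this with scripts/rpc.py (sizes come from MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 above, the transport flags are as traced; this is an approximation of the batch file, not its literal contents):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # path per this workspace
$rpc nvmf_create_transport -t tcp -o -u 8192                # flags exactly as traced above
$rpc bdev_malloc_create 64 512 -b Malloc1                   # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a    # -a: allow any host NQN
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420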
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2821810 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2821810 /var/tmp/bdevperf.sock 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@827 -- # '[' -z 2821810 ']' 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:41.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:41.558 { 00:16:41.558 "params": { 00:16:41.558 "name": "Nvme$subsystem", 00:16:41.558 "trtype": "$TEST_TRANSPORT", 00:16:41.558 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.558 "adrfam": "ipv4", 00:16:41.558 "trsvcid": "$NVMF_PORT", 00:16:41.558 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.558 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.558 "hdgst": ${hdgst:-false}, 00:16:41.558 "ddgst": ${ddgst:-false} 00:16:41.558 }, 00:16:41.558 "method": "bdev_nvme_attach_controller" 00:16:41.558 } 00:16:41.558 EOF 00:16:41.558 )") 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:41.558 { 00:16:41.558 "params": { 00:16:41.558 "name": "Nvme$subsystem", 00:16:41.558 "trtype": "$TEST_TRANSPORT", 00:16:41.558 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.558 "adrfam": "ipv4", 00:16:41.558 "trsvcid": "$NVMF_PORT", 00:16:41.558 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.558 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.558 "hdgst": ${hdgst:-false}, 00:16:41.558 "ddgst": ${ddgst:-false} 00:16:41.558 }, 00:16:41.558 "method": "bdev_nvme_attach_controller" 00:16:41.558 } 00:16:41.558 EOF 00:16:41.558 )") 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 
-- # for subsystem in "${@:-1}" 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:41.558 { 00:16:41.558 "params": { 00:16:41.558 "name": "Nvme$subsystem", 00:16:41.558 "trtype": "$TEST_TRANSPORT", 00:16:41.558 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.558 "adrfam": "ipv4", 00:16:41.558 "trsvcid": "$NVMF_PORT", 00:16:41.558 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.558 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.558 "hdgst": ${hdgst:-false}, 00:16:41.558 "ddgst": ${ddgst:-false} 00:16:41.558 }, 00:16:41.558 "method": "bdev_nvme_attach_controller" 00:16:41.558 } 00:16:41.558 EOF 00:16:41.558 )") 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:41.558 { 00:16:41.558 "params": { 00:16:41.558 "name": "Nvme$subsystem", 00:16:41.558 "trtype": "$TEST_TRANSPORT", 00:16:41.558 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.558 "adrfam": "ipv4", 00:16:41.558 "trsvcid": "$NVMF_PORT", 00:16:41.558 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.558 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.558 "hdgst": ${hdgst:-false}, 00:16:41.558 "ddgst": ${ddgst:-false} 00:16:41.558 }, 00:16:41.558 "method": "bdev_nvme_attach_controller" 00:16:41.558 } 00:16:41.558 EOF 00:16:41.558 )") 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:41.558 { 00:16:41.558 "params": { 00:16:41.558 "name": "Nvme$subsystem", 00:16:41.558 "trtype": "$TEST_TRANSPORT", 00:16:41.558 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.558 "adrfam": "ipv4", 00:16:41.558 "trsvcid": "$NVMF_PORT", 00:16:41.558 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.558 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.558 "hdgst": ${hdgst:-false}, 00:16:41.558 "ddgst": ${ddgst:-false} 00:16:41.558 }, 00:16:41.558 "method": "bdev_nvme_attach_controller" 00:16:41.558 } 00:16:41.558 EOF 00:16:41.558 )") 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:41.558 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:41.558 { 00:16:41.558 "params": { 00:16:41.558 "name": "Nvme$subsystem", 00:16:41.558 "trtype": "$TEST_TRANSPORT", 00:16:41.558 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.558 "adrfam": "ipv4", 00:16:41.558 "trsvcid": "$NVMF_PORT", 00:16:41.558 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.558 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.558 "hdgst": ${hdgst:-false}, 00:16:41.559 "ddgst": ${ddgst:-false} 00:16:41.559 }, 00:16:41.559 "method": "bdev_nvme_attach_controller" 00:16:41.559 } 00:16:41.559 EOF 00:16:41.559 )") 00:16:41.559 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:41.559 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:16:41.559 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:41.559 { 00:16:41.559 "params": { 00:16:41.559 "name": "Nvme$subsystem", 00:16:41.559 "trtype": "$TEST_TRANSPORT", 00:16:41.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.559 "adrfam": "ipv4", 00:16:41.559 "trsvcid": "$NVMF_PORT", 00:16:41.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.559 "hdgst": ${hdgst:-false}, 00:16:41.559 "ddgst": ${ddgst:-false} 00:16:41.559 }, 00:16:41.559 "method": "bdev_nvme_attach_controller" 00:16:41.559 } 00:16:41.559 EOF 00:16:41.559 )") 00:16:41.559 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:41.559 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:41.559 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:41.559 { 00:16:41.559 "params": { 00:16:41.559 "name": "Nvme$subsystem", 00:16:41.559 "trtype": "$TEST_TRANSPORT", 00:16:41.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.559 "adrfam": "ipv4", 00:16:41.559 "trsvcid": "$NVMF_PORT", 00:16:41.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.559 "hdgst": ${hdgst:-false}, 00:16:41.559 "ddgst": ${ddgst:-false} 00:16:41.559 }, 00:16:41.559 "method": "bdev_nvme_attach_controller" 00:16:41.559 } 00:16:41.559 EOF 00:16:41.559 )") 00:16:41.559 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:41.559 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:41.559 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:41.559 { 00:16:41.559 "params": { 00:16:41.559 "name": "Nvme$subsystem", 00:16:41.559 "trtype": "$TEST_TRANSPORT", 00:16:41.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.559 "adrfam": "ipv4", 00:16:41.559 "trsvcid": "$NVMF_PORT", 00:16:41.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.559 "hdgst": ${hdgst:-false}, 00:16:41.559 "ddgst": ${ddgst:-false} 00:16:41.559 }, 00:16:41.559 "method": "bdev_nvme_attach_controller" 00:16:41.559 } 00:16:41.559 EOF 00:16:41.559 )") 00:16:41.559 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:41.559 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:41.559 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:41.559 { 00:16:41.559 "params": { 00:16:41.559 "name": "Nvme$subsystem", 00:16:41.559 "trtype": "$TEST_TRANSPORT", 00:16:41.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:41.559 "adrfam": "ipv4", 00:16:41.559 "trsvcid": "$NVMF_PORT", 00:16:41.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:41.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:41.559 "hdgst": ${hdgst:-false}, 00:16:41.559 "ddgst": ${ddgst:-false} 00:16:41.559 }, 00:16:41.559 "method": "bdev_nvme_attach_controller" 00:16:41.559 } 00:16:41.559 EOF 00:16:41.559 )") 00:16:41.559 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:41.559 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:16:41.559 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:16:41.559 10:56:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:41.559 "params": { 00:16:41.559 "name": "Nvme1", 00:16:41.559 "trtype": "tcp", 00:16:41.559 "traddr": "10.0.0.2", 00:16:41.559 "adrfam": "ipv4", 00:16:41.559 "trsvcid": "4420", 00:16:41.559 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:41.559 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:41.559 "hdgst": false, 00:16:41.559 "ddgst": false 00:16:41.559 }, 00:16:41.559 "method": "bdev_nvme_attach_controller" 00:16:41.559 },{ 00:16:41.559 "params": { 00:16:41.559 "name": "Nvme2", 00:16:41.559 "trtype": "tcp", 00:16:41.559 "traddr": "10.0.0.2", 00:16:41.559 "adrfam": "ipv4", 00:16:41.559 "trsvcid": "4420", 00:16:41.559 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:41.559 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:41.559 "hdgst": false, 00:16:41.559 "ddgst": false 00:16:41.559 }, 00:16:41.559 "method": "bdev_nvme_attach_controller" 00:16:41.559 },{ 00:16:41.559 "params": { 00:16:41.559 "name": "Nvme3", 00:16:41.559 "trtype": "tcp", 00:16:41.559 "traddr": "10.0.0.2", 00:16:41.559 "adrfam": "ipv4", 00:16:41.559 "trsvcid": "4420", 00:16:41.559 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:41.559 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:41.559 "hdgst": false, 00:16:41.559 "ddgst": false 00:16:41.559 }, 00:16:41.559 "method": "bdev_nvme_attach_controller" 00:16:41.559 },{ 00:16:41.559 "params": { 00:16:41.559 "name": "Nvme4", 00:16:41.559 "trtype": "tcp", 00:16:41.559 "traddr": "10.0.0.2", 00:16:41.559 "adrfam": "ipv4", 00:16:41.559 "trsvcid": "4420", 00:16:41.559 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:41.559 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:41.559 "hdgst": false, 00:16:41.559 "ddgst": false 00:16:41.559 }, 00:16:41.559 "method": "bdev_nvme_attach_controller" 00:16:41.559 },{ 00:16:41.559 "params": { 00:16:41.559 "name": "Nvme5", 00:16:41.559 "trtype": "tcp", 00:16:41.559 "traddr": "10.0.0.2", 00:16:41.559 "adrfam": "ipv4", 00:16:41.559 "trsvcid": "4420", 00:16:41.559 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:41.559 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:41.559 "hdgst": false, 00:16:41.559 "ddgst": false 00:16:41.559 }, 00:16:41.559 "method": "bdev_nvme_attach_controller" 00:16:41.559 },{ 00:16:41.559 "params": { 00:16:41.559 "name": "Nvme6", 00:16:41.559 "trtype": "tcp", 00:16:41.559 "traddr": "10.0.0.2", 00:16:41.559 "adrfam": "ipv4", 00:16:41.559 "trsvcid": "4420", 00:16:41.559 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:41.559 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:41.559 "hdgst": false, 00:16:41.559 "ddgst": false 00:16:41.559 }, 00:16:41.559 "method": "bdev_nvme_attach_controller" 00:16:41.559 },{ 00:16:41.559 "params": { 00:16:41.559 "name": "Nvme7", 00:16:41.559 "trtype": "tcp", 00:16:41.559 "traddr": "10.0.0.2", 00:16:41.559 "adrfam": "ipv4", 00:16:41.559 "trsvcid": "4420", 00:16:41.559 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:41.559 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:41.559 "hdgst": false, 00:16:41.559 "ddgst": false 00:16:41.559 }, 00:16:41.559 "method": "bdev_nvme_attach_controller" 00:16:41.559 },{ 00:16:41.559 "params": { 00:16:41.559 "name": "Nvme8", 00:16:41.559 "trtype": "tcp", 00:16:41.559 "traddr": "10.0.0.2", 00:16:41.559 "adrfam": "ipv4", 00:16:41.559 "trsvcid": "4420", 00:16:41.559 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:41.559 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:41.559 "hdgst": false, 
00:16:41.559 "ddgst": false 00:16:41.559 }, 00:16:41.559 "method": "bdev_nvme_attach_controller" 00:16:41.559 },{ 00:16:41.559 "params": { 00:16:41.559 "name": "Nvme9", 00:16:41.559 "trtype": "tcp", 00:16:41.559 "traddr": "10.0.0.2", 00:16:41.559 "adrfam": "ipv4", 00:16:41.559 "trsvcid": "4420", 00:16:41.559 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:41.560 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:41.560 "hdgst": false, 00:16:41.560 "ddgst": false 00:16:41.560 }, 00:16:41.560 "method": "bdev_nvme_attach_controller" 00:16:41.560 },{ 00:16:41.560 "params": { 00:16:41.560 "name": "Nvme10", 00:16:41.560 "trtype": "tcp", 00:16:41.560 "traddr": "10.0.0.2", 00:16:41.560 "adrfam": "ipv4", 00:16:41.560 "trsvcid": "4420", 00:16:41.560 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:41.560 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:41.560 "hdgst": false, 00:16:41.560 "ddgst": false 00:16:41.560 }, 00:16:41.560 "method": "bdev_nvme_attach_controller" 00:16:41.560 }' 00:16:41.560 [2024-05-15 10:56:57.608865] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:16:41.560 [2024-05-15 10:56:57.608988] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:41.560 EAL: No free 2048 kB hugepages reported on node 1 00:16:41.560 [2024-05-15 10:56:57.683435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:41.818 [2024-05-15 10:56:57.796322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:43.744 10:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:43.744 10:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # return 0 00:16:43.744 10:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:43.744 10:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.744 10:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:43.744 10:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.744 10:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2821810 00:16:43.744 10:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:16:43.744 10:56:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:16:44.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2821810 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2821620 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 
-- # local subsystem config 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:44.679 { 00:16:44.679 "params": { 00:16:44.679 "name": "Nvme$subsystem", 00:16:44.679 "trtype": "$TEST_TRANSPORT", 00:16:44.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:44.679 "adrfam": "ipv4", 00:16:44.679 "trsvcid": "$NVMF_PORT", 00:16:44.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:44.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:44.679 "hdgst": ${hdgst:-false}, 00:16:44.679 "ddgst": ${ddgst:-false} 00:16:44.679 }, 00:16:44.679 "method": "bdev_nvme_attach_controller" 00:16:44.679 } 00:16:44.679 EOF 00:16:44.679 )") 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:44.679 { 00:16:44.679 "params": { 00:16:44.679 "name": "Nvme$subsystem", 00:16:44.679 "trtype": "$TEST_TRANSPORT", 00:16:44.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:44.679 "adrfam": "ipv4", 00:16:44.679 "trsvcid": "$NVMF_PORT", 00:16:44.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:44.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:44.679 "hdgst": ${hdgst:-false}, 00:16:44.679 "ddgst": ${ddgst:-false} 00:16:44.679 }, 00:16:44.679 "method": "bdev_nvme_attach_controller" 00:16:44.679 } 00:16:44.679 EOF 00:16:44.679 )") 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:44.679 { 00:16:44.679 "params": { 00:16:44.679 "name": "Nvme$subsystem", 00:16:44.679 "trtype": "$TEST_TRANSPORT", 00:16:44.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:44.679 "adrfam": "ipv4", 00:16:44.679 "trsvcid": "$NVMF_PORT", 00:16:44.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:44.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:44.679 "hdgst": ${hdgst:-false}, 00:16:44.679 "ddgst": ${ddgst:-false} 00:16:44.679 }, 00:16:44.679 "method": "bdev_nvme_attach_controller" 00:16:44.679 } 00:16:44.679 EOF 00:16:44.679 )") 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:44.679 { 00:16:44.679 "params": { 00:16:44.679 "name": "Nvme$subsystem", 00:16:44.679 "trtype": "$TEST_TRANSPORT", 00:16:44.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:44.679 "adrfam": "ipv4", 00:16:44.679 "trsvcid": "$NVMF_PORT", 00:16:44.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:44.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:44.679 "hdgst": ${hdgst:-false}, 00:16:44.679 "ddgst": ${ddgst:-false} 00:16:44.679 }, 00:16:44.679 "method": "bdev_nvme_attach_controller" 00:16:44.679 } 00:16:44.679 EOF 00:16:44.679 )") 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:44.679 { 00:16:44.679 "params": { 00:16:44.679 "name": "Nvme$subsystem", 00:16:44.679 "trtype": "$TEST_TRANSPORT", 00:16:44.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:44.679 "adrfam": "ipv4", 00:16:44.679 "trsvcid": "$NVMF_PORT", 00:16:44.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:44.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:44.679 "hdgst": ${hdgst:-false}, 00:16:44.679 "ddgst": ${ddgst:-false} 00:16:44.679 }, 00:16:44.679 "method": "bdev_nvme_attach_controller" 00:16:44.679 } 00:16:44.679 EOF 00:16:44.679 )") 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:44.679 { 00:16:44.679 "params": { 00:16:44.679 "name": "Nvme$subsystem", 00:16:44.679 "trtype": "$TEST_TRANSPORT", 00:16:44.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:44.679 "adrfam": "ipv4", 00:16:44.679 "trsvcid": "$NVMF_PORT", 00:16:44.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:44.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:44.679 "hdgst": ${hdgst:-false}, 00:16:44.679 "ddgst": ${ddgst:-false} 00:16:44.679 }, 00:16:44.679 "method": "bdev_nvme_attach_controller" 00:16:44.679 } 00:16:44.679 EOF 00:16:44.679 )") 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:44.679 { 00:16:44.679 "params": { 00:16:44.679 "name": "Nvme$subsystem", 00:16:44.679 "trtype": "$TEST_TRANSPORT", 00:16:44.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:44.679 "adrfam": "ipv4", 00:16:44.679 "trsvcid": "$NVMF_PORT", 00:16:44.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:44.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:44.679 "hdgst": ${hdgst:-false}, 00:16:44.679 "ddgst": ${ddgst:-false} 00:16:44.679 }, 00:16:44.679 "method": "bdev_nvme_attach_controller" 00:16:44.679 } 00:16:44.679 EOF 00:16:44.679 )") 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:44.679 { 00:16:44.679 "params": { 00:16:44.679 "name": "Nvme$subsystem", 00:16:44.679 "trtype": "$TEST_TRANSPORT", 00:16:44.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:44.679 "adrfam": "ipv4", 00:16:44.679 "trsvcid": "$NVMF_PORT", 00:16:44.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:44.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:44.679 "hdgst": ${hdgst:-false}, 00:16:44.679 "ddgst": ${ddgst:-false} 00:16:44.679 }, 00:16:44.679 "method": "bdev_nvme_attach_controller" 00:16:44.679 } 00:16:44.679 EOF 00:16:44.679 )") 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:44.679 10:57:00 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:44.679 { 00:16:44.679 "params": { 00:16:44.679 "name": "Nvme$subsystem", 00:16:44.679 "trtype": "$TEST_TRANSPORT", 00:16:44.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:44.679 "adrfam": "ipv4", 00:16:44.679 "trsvcid": "$NVMF_PORT", 00:16:44.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:44.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:44.679 "hdgst": ${hdgst:-false}, 00:16:44.679 "ddgst": ${ddgst:-false} 00:16:44.679 }, 00:16:44.679 "method": "bdev_nvme_attach_controller" 00:16:44.679 } 00:16:44.679 EOF 00:16:44.679 )") 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:44.679 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:44.679 { 00:16:44.679 "params": { 00:16:44.679 "name": "Nvme$subsystem", 00:16:44.679 "trtype": "$TEST_TRANSPORT", 00:16:44.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:44.679 "adrfam": "ipv4", 00:16:44.679 "trsvcid": "$NVMF_PORT", 00:16:44.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:44.680 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:44.680 "hdgst": ${hdgst:-false}, 00:16:44.680 "ddgst": ${ddgst:-false} 00:16:44.680 }, 00:16:44.680 "method": "bdev_nvme_attach_controller" 00:16:44.680 } 00:16:44.680 EOF 00:16:44.680 )") 00:16:44.680 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:16:44.680 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
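The repetitive config+= blocks above are gen_nvmf_target_json assembling one JSON fragment per subsystem from a heredoc, then comma-joining them via IFS. A minimal standalone sketch of the pattern (two subsystems; the [...] wrapper is a simplification here so jq sees valid JSON, whereas the real helper uses its own envelope):

config=()
for subsystem in 1 2; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
printf '[%s]\n' "${config[*]}" | jq .   # join fragments with commas and pretty-print, as traced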
00:16:44.680 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:16:44.680 10:57:00 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:44.680 "params": { 00:16:44.680 "name": "Nvme1", 00:16:44.680 "trtype": "tcp", 00:16:44.680 "traddr": "10.0.0.2", 00:16:44.680 "adrfam": "ipv4", 00:16:44.680 "trsvcid": "4420", 00:16:44.680 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:44.680 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:44.680 "hdgst": false, 00:16:44.680 "ddgst": false 00:16:44.680 }, 00:16:44.680 "method": "bdev_nvme_attach_controller" 00:16:44.680 },{ 00:16:44.680 "params": { 00:16:44.680 "name": "Nvme2", 00:16:44.680 "trtype": "tcp", 00:16:44.680 "traddr": "10.0.0.2", 00:16:44.680 "adrfam": "ipv4", 00:16:44.680 "trsvcid": "4420", 00:16:44.680 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:44.680 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:44.680 "hdgst": false, 00:16:44.680 "ddgst": false 00:16:44.680 }, 00:16:44.680 "method": "bdev_nvme_attach_controller" 00:16:44.680 },{ 00:16:44.680 "params": { 00:16:44.680 "name": "Nvme3", 00:16:44.680 "trtype": "tcp", 00:16:44.680 "traddr": "10.0.0.2", 00:16:44.680 "adrfam": "ipv4", 00:16:44.680 "trsvcid": "4420", 00:16:44.680 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:44.680 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:44.680 "hdgst": false, 00:16:44.680 "ddgst": false 00:16:44.680 }, 00:16:44.680 "method": "bdev_nvme_attach_controller" 00:16:44.680 },{ 00:16:44.680 "params": { 00:16:44.680 "name": "Nvme4", 00:16:44.680 "trtype": "tcp", 00:16:44.680 "traddr": "10.0.0.2", 00:16:44.680 "adrfam": "ipv4", 00:16:44.680 "trsvcid": "4420", 00:16:44.680 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:44.680 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:44.680 "hdgst": false, 00:16:44.680 "ddgst": false 00:16:44.680 }, 00:16:44.680 "method": "bdev_nvme_attach_controller" 00:16:44.680 },{ 00:16:44.680 "params": { 00:16:44.680 "name": "Nvme5", 00:16:44.680 "trtype": "tcp", 00:16:44.680 "traddr": "10.0.0.2", 00:16:44.680 "adrfam": "ipv4", 00:16:44.680 "trsvcid": "4420", 00:16:44.680 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:44.680 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:44.680 "hdgst": false, 00:16:44.680 "ddgst": false 00:16:44.680 }, 00:16:44.680 "method": "bdev_nvme_attach_controller" 00:16:44.680 },{ 00:16:44.680 "params": { 00:16:44.680 "name": "Nvme6", 00:16:44.680 "trtype": "tcp", 00:16:44.680 "traddr": "10.0.0.2", 00:16:44.680 "adrfam": "ipv4", 00:16:44.680 "trsvcid": "4420", 00:16:44.680 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:44.680 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:44.680 "hdgst": false, 00:16:44.680 "ddgst": false 00:16:44.680 }, 00:16:44.680 "method": "bdev_nvme_attach_controller" 00:16:44.680 },{ 00:16:44.680 "params": { 00:16:44.680 "name": "Nvme7", 00:16:44.680 "trtype": "tcp", 00:16:44.680 "traddr": "10.0.0.2", 00:16:44.680 "adrfam": "ipv4", 00:16:44.680 "trsvcid": "4420", 00:16:44.680 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:44.680 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:44.680 "hdgst": false, 00:16:44.680 "ddgst": false 00:16:44.680 }, 00:16:44.680 "method": "bdev_nvme_attach_controller" 00:16:44.680 },{ 00:16:44.680 "params": { 00:16:44.680 "name": "Nvme8", 00:16:44.680 "trtype": "tcp", 00:16:44.680 "traddr": "10.0.0.2", 00:16:44.680 "adrfam": "ipv4", 00:16:44.680 "trsvcid": "4420", 00:16:44.680 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:44.680 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:44.680 "hdgst": false, 
00:16:44.680 "ddgst": false 00:16:44.680 }, 00:16:44.680 "method": "bdev_nvme_attach_controller" 00:16:44.680 },{ 00:16:44.680 "params": { 00:16:44.680 "name": "Nvme9", 00:16:44.680 "trtype": "tcp", 00:16:44.680 "traddr": "10.0.0.2", 00:16:44.680 "adrfam": "ipv4", 00:16:44.680 "trsvcid": "4420", 00:16:44.680 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:44.680 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:44.680 "hdgst": false, 00:16:44.680 "ddgst": false 00:16:44.680 }, 00:16:44.680 "method": "bdev_nvme_attach_controller" 00:16:44.680 },{ 00:16:44.680 "params": { 00:16:44.680 "name": "Nvme10", 00:16:44.680 "trtype": "tcp", 00:16:44.680 "traddr": "10.0.0.2", 00:16:44.680 "adrfam": "ipv4", 00:16:44.680 "trsvcid": "4420", 00:16:44.680 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:44.680 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:44.680 "hdgst": false, 00:16:44.680 "ddgst": false 00:16:44.680 }, 00:16:44.680 "method": "bdev_nvme_attach_controller" 00:16:44.680 }' 00:16:44.680 [2024-05-15 10:57:00.673689] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:16:44.680 [2024-05-15 10:57:00.673783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2822273 ] 00:16:44.680 EAL: No free 2048 kB hugepages reported on node 1 00:16:44.680 [2024-05-15 10:57:00.750330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.680 [2024-05-15 10:57:00.862447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:46.580 Running I/O for 1 seconds... 00:16:47.514 00:16:47.514 Latency(us) 00:16:47.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:47.514 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:47.514 Verification LBA range: start 0x0 length 0x400 00:16:47.514 Nvme1n1 : 1.14 225.37 14.09 0.00 0.00 280858.74 22719.15 268746.15 00:16:47.514 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:47.514 Verification LBA range: start 0x0 length 0x400 00:16:47.514 Nvme2n1 : 1.17 219.12 13.69 0.00 0.00 284558.22 20583.16 284280.60 00:16:47.514 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:47.514 Verification LBA range: start 0x0 length 0x400 00:16:47.514 Nvme3n1 : 1.15 223.29 13.96 0.00 0.00 274694.45 22039.51 268746.15 00:16:47.514 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:47.514 Verification LBA range: start 0x0 length 0x400 00:16:47.514 Nvme4n1 : 1.19 215.50 13.47 0.00 0.00 280389.78 48156.82 274959.93 00:16:47.514 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:47.514 Verification LBA range: start 0x0 length 0x400 00:16:47.514 Nvme5n1 : 1.20 214.12 13.38 0.00 0.00 277575.68 39807.05 285834.05 00:16:47.514 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:47.514 Verification LBA range: start 0x0 length 0x400 00:16:47.514 Nvme6n1 : 1.12 170.77 10.67 0.00 0.00 340382.91 24563.86 299815.06 00:16:47.514 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:47.514 Verification LBA range: start 0x0 length 0x400 00:16:47.514 Nvme7n1 : 1.15 166.57 10.41 0.00 0.00 343908.76 22913.33 304475.40 00:16:47.514 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:47.514 Verification LBA range: start 0x0 length 0x400 
00:16:47.514 Nvme8n1 : 1.19 161.09 10.07 0.00 0.00 350980.17 25437.68 416323.51 00:16:47.514 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:47.514 Verification LBA range: start 0x0 length 0x400 00:16:47.514 Nvme9n1 : 1.18 162.42 10.15 0.00 0.00 341652.92 43690.67 337097.77 00:16:47.514 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:47.514 Verification LBA range: start 0x0 length 0x400 00:16:47.514 Nvme10n1 : 1.20 213.27 13.33 0.00 0.00 256450.94 24078.41 271853.04 00:16:47.514 =================================================================================================================== 00:16:47.514 Total : 1971.51 123.22 0.00 0.00 298580.15 20583.16 416323.51 00:16:47.772 10:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:16:47.772 10:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:16:47.772 10:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:47.773 10:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:47.773 10:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:16:47.773 10:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:47.773 10:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:16:47.773 10:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:47.773 10:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:16:47.773 10:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:47.773 10:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:47.773 rmmod nvme_tcp 00:16:47.773 rmmod nvme_fabrics 00:16:47.773 rmmod nvme_keyring 00:16:47.773 10:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:47.773 10:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:16:47.773 10:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:16:47.773 10:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2821620 ']' 00:16:47.773 10:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2821620 00:16:47.773 10:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@946 -- # '[' -z 2821620 ']' 00:16:47.773 10:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # kill -0 2821620 00:16:47.773 10:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # uname 00:16:47.773 10:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:47.773 10:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2821620 00:16:47.773 10:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:47.773 10:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:47.773 
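One quick consistency check on the table above: at -o 65536 every I/O is 64 KiB, so MiB/s must equal IOPS/16, and each row satisfies that (Nvme1n1: 225.37/16 ≈ 14.09; Total: 1971.51/16 ≈ 123.22). For example:

awk 'BEGIN { iops = 1971.51; printf "%.2f MiB/s\n", iops * 65536 / 1048576 }'   # -> 123.22, matching the Total row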
10:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2821620' 00:16:47.773 killing process with pid 2821620 00:16:47.773 10:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@965 -- # kill 2821620 00:16:47.773 [2024-05-15 10:57:03.999801] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:47.773 10:57:03 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # wait 2821620 00:16:48.339 10:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:48.339 10:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:48.339 10:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:48.339 10:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:48.339 10:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:48.339 10:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.339 10:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:48.339 10:57:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.878 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:50.878 00:16:50.878 real 0m13.225s 00:16:50.878 user 0m38.092s 00:16:50.878 sys 0m3.651s 00:16:50.878 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:50.878 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:16:50.878 ************************************ 00:16:50.878 END TEST nvmf_shutdown_tc1 00:16:50.878 ************************************ 00:16:50.878 10:57:06 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:16:50.878 10:57:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:50.878 10:57:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:50.878 10:57:06 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:16:50.878 ************************************ 00:16:50.878 START TEST nvmf_shutdown_tc2 00:16:50.878 ************************************ 00:16:50.878 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc2 00:16:50.878 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:16:50.878 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:16:50.878 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:50.878 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.878 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:50.878 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:50.878 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:16:50.878 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:50.879 10:57:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:50.879 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:50.879 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:50.879 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:50.879 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:50.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:50.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:16:50.879 00:16:50.879 --- 10.0.0.2 ping statistics --- 00:16:50.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.879 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:50.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:50.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:16:50.879 00:16:50.879 --- 10.0.0.1 ping statistics --- 00:16:50.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:50.879 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:16:50.879 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:50.880 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:50.880 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:50.880 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:50.880 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:50.880 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:50.880 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:50.880 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:16:50.880 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:50.880 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:50.880 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:50.880 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2823162 00:16:50.880 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:50.880 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2823162 00:16:50.880 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 2823162 ']' 00:16:50.880 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.880 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:50.880 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.880 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:50.880 10:57:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:50.880 [2024-05-15 10:57:06.866277] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:16:50.880 [2024-05-15 10:57:06.866355] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:50.880 EAL: No free 2048 kB hugepages reported on node 1 00:16:50.880 [2024-05-15 10:57:06.946890] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:50.880 [2024-05-15 10:57:07.064126] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:50.880 [2024-05-15 10:57:07.064182] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:50.880 [2024-05-15 10:57:07.064194] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:50.880 [2024-05-15 10:57:07.064205] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:50.880 [2024-05-15 10:57:07.064216] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:50.880 [2024-05-15 10:57:07.064309] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:50.880 [2024-05-15 10:57:07.064423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:50.880 [2024-05-15 10:57:07.064490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:50.880 [2024-05-15 10:57:07.064493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:51.139 [2024-05-15 10:57:07.230834] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.139 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:51.139 Malloc1 00:16:51.139 [2024-05-15 10:57:07.312028] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:51.139 [2024-05-15 10:57:07.312362] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:51.139 Malloc2 00:16:51.398 Malloc3 00:16:51.398 Malloc4 00:16:51.398 Malloc5 00:16:51.398 Malloc6 00:16:51.398 Malloc7 00:16:51.657 Malloc8 00:16:51.657 Malloc9 00:16:51.657 Malloc10 00:16:51.657 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.657 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:16:51.657 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:51.657 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:51.657 10:57:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2823452 00:16:51.657 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2823452 /var/tmp/bdevperf.sock 00:16:51.657 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@827 -- # '[' -z 2823452 ']' 00:16:51.657 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:51.657 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:16:51.657 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:51.657 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:51.657 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:51.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:51.657 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:16:51.657 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:51.657 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:16:51.657 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:51.657 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:51.657 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:51.657 { 00:16:51.657 "params": { 00:16:51.657 "name": "Nvme$subsystem", 00:16:51.657 "trtype": "$TEST_TRANSPORT", 00:16:51.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:51.657 "adrfam": "ipv4", 00:16:51.657 "trsvcid": "$NVMF_PORT", 00:16:51.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:51.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:51.657 "hdgst": ${hdgst:-false}, 00:16:51.657 "ddgst": ${ddgst:-false} 00:16:51.657 }, 00:16:51.657 "method": "bdev_nvme_attach_controller" 00:16:51.657 } 00:16:51.657 EOF 00:16:51.657 )") 00:16:51.657 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:16:51.657 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:51.657 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:51.657 { 00:16:51.657 "params": { 00:16:51.657 "name": "Nvme$subsystem", 00:16:51.657 "trtype": "$TEST_TRANSPORT", 00:16:51.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:51.657 "adrfam": "ipv4", 00:16:51.657 "trsvcid": "$NVMF_PORT", 00:16:51.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:51.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:51.657 "hdgst": ${hdgst:-false}, 00:16:51.658 "ddgst": ${ddgst:-false} 00:16:51.658 }, 00:16:51.658 "method": "bdev_nvme_attach_controller" 00:16:51.658 } 00:16:51.658 EOF 00:16:51.658 )") 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:51.658 { 00:16:51.658 "params": { 00:16:51.658 "name": "Nvme$subsystem", 00:16:51.658 "trtype": "$TEST_TRANSPORT", 00:16:51.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:51.658 "adrfam": "ipv4", 00:16:51.658 "trsvcid": "$NVMF_PORT", 00:16:51.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:51.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:51.658 "hdgst": ${hdgst:-false}, 00:16:51.658 "ddgst": ${ddgst:-false} 00:16:51.658 }, 00:16:51.658 "method": "bdev_nvme_attach_controller" 00:16:51.658 } 00:16:51.658 EOF 00:16:51.658 )") 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:51.658 { 00:16:51.658 "params": { 00:16:51.658 "name": "Nvme$subsystem", 00:16:51.658 "trtype": "$TEST_TRANSPORT", 00:16:51.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:51.658 "adrfam": "ipv4", 00:16:51.658 "trsvcid": "$NVMF_PORT", 00:16:51.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:51.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:51.658 "hdgst": ${hdgst:-false}, 00:16:51.658 "ddgst": ${ddgst:-false} 00:16:51.658 }, 00:16:51.658 "method": "bdev_nvme_attach_controller" 00:16:51.658 } 00:16:51.658 EOF 00:16:51.658 )") 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:51.658 { 00:16:51.658 "params": { 00:16:51.658 "name": "Nvme$subsystem", 00:16:51.658 "trtype": "$TEST_TRANSPORT", 00:16:51.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:51.658 "adrfam": "ipv4", 00:16:51.658 "trsvcid": "$NVMF_PORT", 00:16:51.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:51.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:51.658 "hdgst": ${hdgst:-false}, 00:16:51.658 "ddgst": ${ddgst:-false} 00:16:51.658 }, 00:16:51.658 "method": "bdev_nvme_attach_controller" 00:16:51.658 } 00:16:51.658 EOF 00:16:51.658 )") 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:51.658 { 00:16:51.658 "params": { 00:16:51.658 "name": "Nvme$subsystem", 00:16:51.658 "trtype": "$TEST_TRANSPORT", 00:16:51.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:51.658 "adrfam": "ipv4", 00:16:51.658 "trsvcid": "$NVMF_PORT", 00:16:51.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:51.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:51.658 "hdgst": ${hdgst:-false}, 00:16:51.658 "ddgst": ${ddgst:-false} 00:16:51.658 }, 00:16:51.658 "method": "bdev_nvme_attach_controller" 00:16:51.658 } 00:16:51.658 EOF 00:16:51.658 )") 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:51.658 { 00:16:51.658 "params": { 00:16:51.658 "name": "Nvme$subsystem", 00:16:51.658 "trtype": "$TEST_TRANSPORT", 00:16:51.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:51.658 "adrfam": "ipv4", 00:16:51.658 "trsvcid": "$NVMF_PORT", 00:16:51.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:51.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:51.658 "hdgst": ${hdgst:-false}, 00:16:51.658 "ddgst": ${ddgst:-false} 00:16:51.658 }, 00:16:51.658 "method": "bdev_nvme_attach_controller" 00:16:51.658 } 00:16:51.658 EOF 00:16:51.658 )") 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:51.658 { 00:16:51.658 "params": { 00:16:51.658 "name": "Nvme$subsystem", 00:16:51.658 "trtype": "$TEST_TRANSPORT", 00:16:51.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:51.658 "adrfam": "ipv4", 00:16:51.658 "trsvcid": "$NVMF_PORT", 00:16:51.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:51.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:51.658 "hdgst": ${hdgst:-false}, 00:16:51.658 "ddgst": ${ddgst:-false} 00:16:51.658 }, 00:16:51.658 "method": "bdev_nvme_attach_controller" 00:16:51.658 } 00:16:51.658 EOF 00:16:51.658 )") 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:51.658 { 00:16:51.658 "params": { 00:16:51.658 "name": "Nvme$subsystem", 00:16:51.658 "trtype": "$TEST_TRANSPORT", 00:16:51.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:51.658 "adrfam": "ipv4", 00:16:51.658 "trsvcid": "$NVMF_PORT", 00:16:51.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:51.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:51.658 "hdgst": ${hdgst:-false}, 00:16:51.658 "ddgst": ${ddgst:-false} 00:16:51.658 }, 00:16:51.658 "method": "bdev_nvme_attach_controller" 00:16:51.658 } 00:16:51.658 EOF 00:16:51.658 )") 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:51.658 { 00:16:51.658 "params": { 00:16:51.658 "name": "Nvme$subsystem", 00:16:51.658 "trtype": "$TEST_TRANSPORT", 00:16:51.658 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:51.658 "adrfam": "ipv4", 00:16:51.658 "trsvcid": "$NVMF_PORT", 00:16:51.658 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:51.658 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:51.658 "hdgst": ${hdgst:-false}, 00:16:51.658 "ddgst": ${ddgst:-false} 00:16:51.658 }, 00:16:51.658 "method": "bdev_nvme_attach_controller" 00:16:51.658 } 00:16:51.658 EOF 00:16:51.658 )") 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
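The gen_nvmf_target_json trace above assembles the bdevperf config by expanding one heredoc per subsystem id into a config array, joining the entries with IFS=, and validating the result with jq. A minimal sketch of that assembly (variable names follow the trace; the real helper embeds the list in a larger bdev subsystem document, and the [ ] wrapper here is only to keep the sketch valid standalone JSON):

gen_attach_controller_json() {
    local subsystem
    local config=()
    # "${@:-1}" defaults to a single subsystem when no ids are passed
    for subsystem in "${@:-1}"; do
        # one bdev_nvme_attach_controller entry per subsystem, shell-expanded
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # join the entries with commas, then let jq normalize/validate
    local IFS=,
    printf '[ %s ]\n' "${config[*]}" | jq .
}

Called as gen_attach_controller_json 1 2 3 4 5 6 7 8 9 10, this yields the ten Nvme1..Nvme10 entries visible in the printf output that follows.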
00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:16:51.658 10:57:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:51.658 "params": { 00:16:51.658 "name": "Nvme1", 00:16:51.658 "trtype": "tcp", 00:16:51.658 "traddr": "10.0.0.2", 00:16:51.658 "adrfam": "ipv4", 00:16:51.658 "trsvcid": "4420", 00:16:51.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:51.658 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:51.658 "hdgst": false, 00:16:51.658 "ddgst": false 00:16:51.658 }, 00:16:51.658 "method": "bdev_nvme_attach_controller" 00:16:51.658 },{ 00:16:51.658 "params": { 00:16:51.658 "name": "Nvme2", 00:16:51.658 "trtype": "tcp", 00:16:51.658 "traddr": "10.0.0.2", 00:16:51.658 "adrfam": "ipv4", 00:16:51.658 "trsvcid": "4420", 00:16:51.658 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:16:51.658 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:16:51.658 "hdgst": false, 00:16:51.658 "ddgst": false 00:16:51.658 }, 00:16:51.658 "method": "bdev_nvme_attach_controller" 00:16:51.658 },{ 00:16:51.658 "params": { 00:16:51.658 "name": "Nvme3", 00:16:51.658 "trtype": "tcp", 00:16:51.658 "traddr": "10.0.0.2", 00:16:51.658 "adrfam": "ipv4", 00:16:51.658 "trsvcid": "4420", 00:16:51.658 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:16:51.658 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:16:51.658 "hdgst": false, 00:16:51.658 "ddgst": false 00:16:51.658 }, 00:16:51.658 "method": "bdev_nvme_attach_controller" 00:16:51.658 },{ 00:16:51.658 "params": { 00:16:51.658 "name": "Nvme4", 00:16:51.658 "trtype": "tcp", 00:16:51.658 "traddr": "10.0.0.2", 00:16:51.658 "adrfam": "ipv4", 00:16:51.658 "trsvcid": "4420", 00:16:51.658 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:16:51.658 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:16:51.658 "hdgst": false, 00:16:51.658 "ddgst": false 00:16:51.658 }, 00:16:51.658 "method": "bdev_nvme_attach_controller" 00:16:51.658 },{ 00:16:51.658 "params": { 00:16:51.658 "name": "Nvme5", 00:16:51.658 "trtype": "tcp", 00:16:51.658 "traddr": "10.0.0.2", 00:16:51.658 "adrfam": "ipv4", 00:16:51.658 "trsvcid": "4420", 00:16:51.658 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:16:51.658 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:16:51.658 "hdgst": false, 00:16:51.658 "ddgst": false 00:16:51.658 }, 00:16:51.658 "method": "bdev_nvme_attach_controller" 00:16:51.658 },{ 00:16:51.658 "params": { 00:16:51.658 "name": "Nvme6", 00:16:51.658 "trtype": "tcp", 00:16:51.658 "traddr": "10.0.0.2", 00:16:51.658 "adrfam": "ipv4", 00:16:51.658 "trsvcid": "4420", 00:16:51.658 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:16:51.658 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:16:51.658 "hdgst": false, 00:16:51.659 "ddgst": false 00:16:51.659 }, 00:16:51.659 "method": "bdev_nvme_attach_controller" 00:16:51.659 },{ 00:16:51.659 "params": { 00:16:51.659 "name": "Nvme7", 00:16:51.659 "trtype": "tcp", 00:16:51.659 "traddr": "10.0.0.2", 00:16:51.659 "adrfam": "ipv4", 00:16:51.659 "trsvcid": "4420", 00:16:51.659 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:16:51.659 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:16:51.659 "hdgst": false, 00:16:51.659 "ddgst": false 00:16:51.659 }, 00:16:51.659 "method": "bdev_nvme_attach_controller" 00:16:51.659 },{ 00:16:51.659 "params": { 00:16:51.659 "name": "Nvme8", 00:16:51.659 "trtype": "tcp", 00:16:51.659 "traddr": "10.0.0.2", 00:16:51.659 "adrfam": "ipv4", 00:16:51.659 "trsvcid": "4420", 00:16:51.659 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:16:51.659 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:16:51.659 "hdgst": false, 
00:16:51.659 "ddgst": false 00:16:51.659 }, 00:16:51.659 "method": "bdev_nvme_attach_controller" 00:16:51.659 },{ 00:16:51.659 "params": { 00:16:51.659 "name": "Nvme9", 00:16:51.659 "trtype": "tcp", 00:16:51.659 "traddr": "10.0.0.2", 00:16:51.659 "adrfam": "ipv4", 00:16:51.659 "trsvcid": "4420", 00:16:51.659 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:16:51.659 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:16:51.659 "hdgst": false, 00:16:51.659 "ddgst": false 00:16:51.659 }, 00:16:51.659 "method": "bdev_nvme_attach_controller" 00:16:51.659 },{ 00:16:51.659 "params": { 00:16:51.659 "name": "Nvme10", 00:16:51.659 "trtype": "tcp", 00:16:51.659 "traddr": "10.0.0.2", 00:16:51.659 "adrfam": "ipv4", 00:16:51.659 "trsvcid": "4420", 00:16:51.659 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:16:51.659 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:16:51.659 "hdgst": false, 00:16:51.659 "ddgst": false 00:16:51.659 }, 00:16:51.659 "method": "bdev_nvme_attach_controller" 00:16:51.659 }' 00:16:51.659 [2024-05-15 10:57:07.824908] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:16:51.659 [2024-05-15 10:57:07.825014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2823452 ] 00:16:51.659 EAL: No free 2048 kB hugepages reported on node 1 00:16:51.917 [2024-05-15 10:57:07.902136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.917 [2024-05-15 10:57:08.014417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.290 Running I/O for 10 seconds... 00:16:53.859 10:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:53.859 10:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # return 0 00:16:53.859 10:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:53.859 10:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.859 10:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:53.859 10:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.859 10:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:16:53.859 10:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:53.859 10:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:16:53.859 10:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:16:53.859 10:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:16:53.859 10:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:16:53.859 10:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:53.859 10:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:53.859 10:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:53.859 10:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:53.859 10:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:53.859 10:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:53.859 10:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:16:53.859 10:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:16:53.859 10:57:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:16:54.118 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:16:54.118 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:54.118 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:54.118 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:54.118 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.118 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:54.118 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.118 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:16:54.118 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:16:54.118 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:16:54.376 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:16:54.376 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:16:54.376 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:16:54.376 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:16:54.377 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.377 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:54.377 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.377 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:16:54.377 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:16:54.377 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:16:54.377 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:16:54.377 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:16:54.377 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2823452 00:16:54.377 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 2823452 ']' 00:16:54.377 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 2823452 00:16:54.377 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 
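The shutdown.sh@57-69 trace above is a bounded poll: up to ten iterations, read num_read_ops for Nvme1n1 over bdevperf's RPC socket and stop once at least 100 reads have completed (the counter goes 3, then 67, then 131 here). A minimal sketch using the same rpc_cmd and jq calls as the trace, where rpc_cmd is the autotest wrapper around SPDK's rpc.py seen elsewhere in this log (not the verbatim target/shutdown.sh body):

waitforio() {
    local rpc_sock=$1 bdev=$2
    [ -z "$rpc_sock" ] && return 1
    [ -z "$bdev" ] && return 1
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        # pull the current read counter for this bdev from bdevperf
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

Invoked as waitforio /var/tmp/bdevperf.sock Nvme1n1, it returns 0 once the counter crosses the threshold, which is what gates the killprocess of the bdevperf pid that follows.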
00:16:54.377 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:54.377 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2823452 00:16:54.377 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:16:54.377 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:16:54.377 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2823452' 00:16:54.377 killing process with pid 2823452 00:16:54.377 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 2823452 00:16:54.377 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 2823452 00:16:54.377 Received shutdown signal, test time was about 1.107697 seconds 00:16:54.377 00:16:54.377 Latency(us) 00:16:54.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.377 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:54.377 Verification LBA range: start 0x0 length 0x400 00:16:54.377 Nvme1n1 : 1.03 186.42 11.65 0.00 0.00 339581.16 26020.22 278066.82 00:16:54.377 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:54.377 Verification LBA range: start 0x0 length 0x400 00:16:54.377 Nvme2n1 : 1.05 243.07 15.19 0.00 0.00 255840.71 21845.33 267192.70 00:16:54.377 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:54.377 Verification LBA range: start 0x0 length 0x400 00:16:54.377 Nvme3n1 : 1.06 185.06 11.57 0.00 0.00 321856.92 10291.58 278066.82 00:16:54.377 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:54.377 Verification LBA range: start 0x0 length 0x400 00:16:54.377 Nvme4n1 : 1.08 237.44 14.84 0.00 0.00 252992.09 25437.68 270299.59 00:16:54.377 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:54.377 Verification LBA range: start 0x0 length 0x400 00:16:54.377 Nvme5n1 : 1.06 180.85 11.30 0.00 0.00 326152.34 26020.22 302921.96 00:16:54.377 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:54.377 Verification LBA range: start 0x0 length 0x400 00:16:54.377 Nvme6n1 : 1.07 238.59 14.91 0.00 0.00 241730.75 21359.88 267192.70 00:16:54.377 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:54.377 Verification LBA range: start 0x0 length 0x400 00:16:54.377 Nvme7n1 : 1.08 236.38 14.77 0.00 0.00 240678.68 23495.87 268746.15 00:16:54.377 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:54.377 Verification LBA range: start 0x0 length 0x400 00:16:54.377 Nvme8n1 : 1.09 235.80 14.74 0.00 0.00 237137.92 22622.06 274959.93 00:16:54.377 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:54.377 Verification LBA range: start 0x0 length 0x400 00:16:54.377 Nvme9n1 : 1.11 176.17 11.01 0.00 0.00 297793.59 36117.62 309135.74 00:16:54.377 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:54.377 Verification LBA range: start 0x0 length 0x400 00:16:54.377 Nvme10n1 : 1.10 233.00 14.56 0.00 0.00 231612.11 27767.85 251658.24 00:16:54.377 =================================================================================================================== 00:16:54.377 Total : 2152.80 
134.55 0.00 0.00 269487.20 10291.58 309135.74 00:16:54.943 10:57:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2823162 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:55.880 rmmod nvme_tcp 00:16:55.880 rmmod nvme_fabrics 00:16:55.880 rmmod nvme_keyring 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2823162 ']' 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2823162 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@946 -- # '[' -z 2823162 ']' 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # kill -0 2823162 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # uname 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2823162 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2823162' 00:16:55.880 killing process with pid 2823162 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@965 -- # kill 2823162 00:16:55.880 [2024-05-15 10:57:11.957398] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for 
removal in v24.09 hit 1 times 00:16:55.880 10:57:11 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # wait 2823162 00:16:56.447 10:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:56.447 10:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:56.447 10:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:56.447 10:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:56.447 10:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:56.447 10:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.447 10:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:56.447 10:57:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.353 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:58.353 00:16:58.353 real 0m7.916s 00:16:58.353 user 0m23.589s 00:16:58.353 sys 0m1.677s 00:16:58.353 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:16:58.353 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:16:58.353 ************************************ 00:16:58.353 END TEST nvmf_shutdown_tc2 00:16:58.353 ************************************ 00:16:58.353 10:57:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:16:58.612 10:57:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:16:58.612 10:57:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1103 -- # xtrace_disable 00:16:58.612 10:57:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:16:58.612 ************************************ 00:16:58.612 START TEST nvmf_shutdown_tc3 00:16:58.612 ************************************ 00:16:58.612 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1121 -- # nvmf_shutdown_tc3 00:16:58.612 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:16:58.612 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:16:58.612 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:58.612 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:58.612 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:58.612 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:58.612 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:58.612 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.612 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:58.612 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:58.612 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:58.612 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:58.613 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:58.613 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 
'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:58.613 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:58.613 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:58.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:58.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:16:58.613 00:16:58.613 --- 10.0.0.2 ping statistics --- 00:16:58.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.613 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:16:58.613 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:58.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:58.614 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:16:58.614 00:16:58.614 --- 10.0.0.1 ping statistics --- 00:16:58.614 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.614 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:16:58.614 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:58.614 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:16:58.614 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:58.614 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:58.614 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:58.614 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:58.614 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:58.614 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:58.614 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:58.614 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:16:58.614 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:58.614 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:58.614 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:58.614 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2824824 00:16:58.614 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:58.614 10:57:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2824824 00:16:58.614 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 2824824 ']' 00:16:58.614 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.614 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:16:58.614 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.614 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:16:58.614 10:57:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:58.614 [2024-05-15 10:57:14.827202] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:16:58.614 [2024-05-15 10:57:14.827298] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.873 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.873 [2024-05-15 10:57:14.903621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:58.873 [2024-05-15 10:57:15.014804] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.873 [2024-05-15 10:57:15.014859] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:58.873 [2024-05-15 10:57:15.014888] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:58.873 [2024-05-15 10:57:15.014900] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:58.873 [2024-05-15 10:57:15.014910] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
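[editor] The waitforlisten call traced above blocks until the freshly launched nvmf_tgt process is alive and its RPC socket accepts connections. A minimal sketch of that poll-until-ready pattern, assuming only standard bash and a UNIX-socket RPC path; the helper name, retry budget, and interval below are illustrative assumptions, not the exact autotest_common.sh implementation:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            [[ -S "$rpc_addr" ]] && return 0         # RPC socket exists: target is up
            sleep 0.1
        done
        return 1                                     # timed out waiting for the socket
    }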
00:16:58.873 [2024-05-15 10:57:15.015055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:58.873 [2024-05-15 10:57:15.015120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:58.873 [2024-05-15 10:57:15.015172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:58.873 [2024-05-15 10:57:15.015174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:59.857 [2024-05-15 10:57:15.842028] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@720 -- # xtrace_disable 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:16:59.857 10:57:15 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.857 10:57:15 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:16:59.857 Malloc1 00:16:59.857 [2024-05-15 10:57:15.917695] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:59.857 [2024-05-15 10:57:15.918009] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:59.857 Malloc2 00:16:59.857 Malloc3 00:16:59.857 Malloc4 00:16:59.857 Malloc5 00:17:00.115 Malloc6 00:17:00.116 Malloc7 00:17:00.116 Malloc8 00:17:00.116 Malloc9 00:17:00.116 Malloc10 00:17:00.374 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.374 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:17:00.374 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:00.374 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:00.374 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2825030 00:17:00.374 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2825030 /var/tmp/bdevperf.sock 00:17:00.374 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@827 -- # '[' -z 2825030 ']' 00:17:00.374 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:00.374 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:17:00.374 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:00.374 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:00.374 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- 
# echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:00.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:00.374 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:17:00.374 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:00.374 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:17:00.374 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:00.374 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:00.374 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:00.374 { 00:17:00.374 "params": { 00:17:00.374 "name": "Nvme$subsystem", 00:17:00.374 "trtype": "$TEST_TRANSPORT", 00:17:00.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:00.374 "adrfam": "ipv4", 00:17:00.374 "trsvcid": "$NVMF_PORT", 00:17:00.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:00.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:00.374 "hdgst": ${hdgst:-false}, 00:17:00.374 "ddgst": ${ddgst:-false} 00:17:00.374 }, 00:17:00.374 "method": "bdev_nvme_attach_controller" 00:17:00.374 } 00:17:00.374 EOF 00:17:00.374 )") 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:00.375 { 00:17:00.375 "params": { 00:17:00.375 "name": "Nvme$subsystem", 00:17:00.375 "trtype": "$TEST_TRANSPORT", 00:17:00.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:00.375 "adrfam": "ipv4", 00:17:00.375 "trsvcid": "$NVMF_PORT", 00:17:00.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:00.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:00.375 "hdgst": ${hdgst:-false}, 00:17:00.375 "ddgst": ${ddgst:-false} 00:17:00.375 }, 00:17:00.375 "method": "bdev_nvme_attach_controller" 00:17:00.375 } 00:17:00.375 EOF 00:17:00.375 )") 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:00.375 { 00:17:00.375 "params": { 00:17:00.375 "name": "Nvme$subsystem", 00:17:00.375 "trtype": "$TEST_TRANSPORT", 00:17:00.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:00.375 "adrfam": "ipv4", 00:17:00.375 "trsvcid": "$NVMF_PORT", 00:17:00.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:00.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:00.375 "hdgst": ${hdgst:-false}, 00:17:00.375 "ddgst": ${ddgst:-false} 00:17:00.375 }, 00:17:00.375 "method": "bdev_nvme_attach_controller" 00:17:00.375 } 00:17:00.375 EOF 00:17:00.375 )") 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:00.375 { 00:17:00.375 "params": { 
00:17:00.375 "name": "Nvme$subsystem", 00:17:00.375 "trtype": "$TEST_TRANSPORT", 00:17:00.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:00.375 "adrfam": "ipv4", 00:17:00.375 "trsvcid": "$NVMF_PORT", 00:17:00.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:00.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:00.375 "hdgst": ${hdgst:-false}, 00:17:00.375 "ddgst": ${ddgst:-false} 00:17:00.375 }, 00:17:00.375 "method": "bdev_nvme_attach_controller" 00:17:00.375 } 00:17:00.375 EOF 00:17:00.375 )") 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:00.375 { 00:17:00.375 "params": { 00:17:00.375 "name": "Nvme$subsystem", 00:17:00.375 "trtype": "$TEST_TRANSPORT", 00:17:00.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:00.375 "adrfam": "ipv4", 00:17:00.375 "trsvcid": "$NVMF_PORT", 00:17:00.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:00.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:00.375 "hdgst": ${hdgst:-false}, 00:17:00.375 "ddgst": ${ddgst:-false} 00:17:00.375 }, 00:17:00.375 "method": "bdev_nvme_attach_controller" 00:17:00.375 } 00:17:00.375 EOF 00:17:00.375 )") 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:00.375 { 00:17:00.375 "params": { 00:17:00.375 "name": "Nvme$subsystem", 00:17:00.375 "trtype": "$TEST_TRANSPORT", 00:17:00.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:00.375 "adrfam": "ipv4", 00:17:00.375 "trsvcid": "$NVMF_PORT", 00:17:00.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:00.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:00.375 "hdgst": ${hdgst:-false}, 00:17:00.375 "ddgst": ${ddgst:-false} 00:17:00.375 }, 00:17:00.375 "method": "bdev_nvme_attach_controller" 00:17:00.375 } 00:17:00.375 EOF 00:17:00.375 )") 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:00.375 { 00:17:00.375 "params": { 00:17:00.375 "name": "Nvme$subsystem", 00:17:00.375 "trtype": "$TEST_TRANSPORT", 00:17:00.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:00.375 "adrfam": "ipv4", 00:17:00.375 "trsvcid": "$NVMF_PORT", 00:17:00.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:00.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:00.375 "hdgst": ${hdgst:-false}, 00:17:00.375 "ddgst": ${ddgst:-false} 00:17:00.375 }, 00:17:00.375 "method": "bdev_nvme_attach_controller" 00:17:00.375 } 00:17:00.375 EOF 00:17:00.375 )") 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:00.375 { 00:17:00.375 "params": { 00:17:00.375 "name": 
"Nvme$subsystem", 00:17:00.375 "trtype": "$TEST_TRANSPORT", 00:17:00.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:00.375 "adrfam": "ipv4", 00:17:00.375 "trsvcid": "$NVMF_PORT", 00:17:00.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:00.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:00.375 "hdgst": ${hdgst:-false}, 00:17:00.375 "ddgst": ${ddgst:-false} 00:17:00.375 }, 00:17:00.375 "method": "bdev_nvme_attach_controller" 00:17:00.375 } 00:17:00.375 EOF 00:17:00.375 )") 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:00.375 { 00:17:00.375 "params": { 00:17:00.375 "name": "Nvme$subsystem", 00:17:00.375 "trtype": "$TEST_TRANSPORT", 00:17:00.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:00.375 "adrfam": "ipv4", 00:17:00.375 "trsvcid": "$NVMF_PORT", 00:17:00.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:00.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:00.375 "hdgst": ${hdgst:-false}, 00:17:00.375 "ddgst": ${ddgst:-false} 00:17:00.375 }, 00:17:00.375 "method": "bdev_nvme_attach_controller" 00:17:00.375 } 00:17:00.375 EOF 00:17:00.375 )") 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:00.375 { 00:17:00.375 "params": { 00:17:00.375 "name": "Nvme$subsystem", 00:17:00.375 "trtype": "$TEST_TRANSPORT", 00:17:00.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:00.375 "adrfam": "ipv4", 00:17:00.375 "trsvcid": "$NVMF_PORT", 00:17:00.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:00.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:00.375 "hdgst": ${hdgst:-false}, 00:17:00.375 "ddgst": ${ddgst:-false} 00:17:00.375 }, 00:17:00.375 "method": "bdev_nvme_attach_controller" 00:17:00.375 } 00:17:00.375 EOF 00:17:00.375 )") 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:17:00.375 10:57:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:00.375 "params": { 00:17:00.375 "name": "Nvme1", 00:17:00.375 "trtype": "tcp", 00:17:00.375 "traddr": "10.0.0.2", 00:17:00.375 "adrfam": "ipv4", 00:17:00.375 "trsvcid": "4420", 00:17:00.375 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:00.375 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:00.375 "hdgst": false, 00:17:00.375 "ddgst": false 00:17:00.375 }, 00:17:00.375 "method": "bdev_nvme_attach_controller" 00:17:00.375 },{ 00:17:00.375 "params": { 00:17:00.375 "name": "Nvme2", 00:17:00.375 "trtype": "tcp", 00:17:00.375 "traddr": "10.0.0.2", 00:17:00.375 "adrfam": "ipv4", 00:17:00.375 "trsvcid": "4420", 00:17:00.375 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:00.375 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:00.375 "hdgst": false, 00:17:00.375 "ddgst": false 00:17:00.375 }, 00:17:00.375 "method": "bdev_nvme_attach_controller" 00:17:00.375 },{ 00:17:00.375 "params": { 00:17:00.375 "name": "Nvme3", 00:17:00.375 "trtype": "tcp", 00:17:00.375 "traddr": "10.0.0.2", 00:17:00.375 "adrfam": "ipv4", 00:17:00.375 "trsvcid": "4420", 00:17:00.375 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:17:00.375 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:17:00.375 "hdgst": false, 00:17:00.375 "ddgst": false 00:17:00.375 }, 00:17:00.375 "method": "bdev_nvme_attach_controller" 00:17:00.375 },{ 00:17:00.375 "params": { 00:17:00.375 "name": "Nvme4", 00:17:00.375 "trtype": "tcp", 00:17:00.375 "traddr": "10.0.0.2", 00:17:00.375 "adrfam": "ipv4", 00:17:00.375 "trsvcid": "4420", 00:17:00.375 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:17:00.375 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:17:00.375 "hdgst": false, 00:17:00.375 "ddgst": false 00:17:00.375 }, 00:17:00.375 "method": "bdev_nvme_attach_controller" 00:17:00.375 },{ 00:17:00.375 "params": { 00:17:00.375 "name": "Nvme5", 00:17:00.376 "trtype": "tcp", 00:17:00.376 "traddr": "10.0.0.2", 00:17:00.376 "adrfam": "ipv4", 00:17:00.376 "trsvcid": "4420", 00:17:00.376 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:17:00.376 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:17:00.376 "hdgst": false, 00:17:00.376 "ddgst": false 00:17:00.376 }, 00:17:00.376 "method": "bdev_nvme_attach_controller" 00:17:00.376 },{ 00:17:00.376 "params": { 00:17:00.376 "name": "Nvme6", 00:17:00.376 "trtype": "tcp", 00:17:00.376 "traddr": "10.0.0.2", 00:17:00.376 "adrfam": "ipv4", 00:17:00.376 "trsvcid": "4420", 00:17:00.376 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:17:00.376 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:17:00.376 "hdgst": false, 00:17:00.376 "ddgst": false 00:17:00.376 }, 00:17:00.376 "method": "bdev_nvme_attach_controller" 00:17:00.376 },{ 00:17:00.376 "params": { 00:17:00.376 "name": "Nvme7", 00:17:00.376 "trtype": "tcp", 00:17:00.376 "traddr": "10.0.0.2", 00:17:00.376 "adrfam": "ipv4", 00:17:00.376 "trsvcid": "4420", 00:17:00.376 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:17:00.376 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:17:00.376 "hdgst": false, 00:17:00.376 "ddgst": false 00:17:00.376 }, 00:17:00.376 "method": "bdev_nvme_attach_controller" 00:17:00.376 },{ 00:17:00.376 "params": { 00:17:00.376 "name": "Nvme8", 00:17:00.376 "trtype": "tcp", 00:17:00.376 "traddr": "10.0.0.2", 00:17:00.376 "adrfam": "ipv4", 00:17:00.376 "trsvcid": "4420", 00:17:00.376 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:17:00.376 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:17:00.376 "hdgst": false, 
00:17:00.376 "ddgst": false 00:17:00.376 }, 00:17:00.376 "method": "bdev_nvme_attach_controller" 00:17:00.376 },{ 00:17:00.376 "params": { 00:17:00.376 "name": "Nvme9", 00:17:00.376 "trtype": "tcp", 00:17:00.376 "traddr": "10.0.0.2", 00:17:00.376 "adrfam": "ipv4", 00:17:00.376 "trsvcid": "4420", 00:17:00.376 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:17:00.376 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:17:00.376 "hdgst": false, 00:17:00.376 "ddgst": false 00:17:00.376 }, 00:17:00.376 "method": "bdev_nvme_attach_controller" 00:17:00.376 },{ 00:17:00.376 "params": { 00:17:00.376 "name": "Nvme10", 00:17:00.376 "trtype": "tcp", 00:17:00.376 "traddr": "10.0.0.2", 00:17:00.376 "adrfam": "ipv4", 00:17:00.376 "trsvcid": "4420", 00:17:00.376 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:17:00.376 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:17:00.376 "hdgst": false, 00:17:00.376 "ddgst": false 00:17:00.376 }, 00:17:00.376 "method": "bdev_nvme_attach_controller" 00:17:00.376 }' 00:17:00.376 [2024-05-15 10:57:16.414449] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:17:00.376 [2024-05-15 10:57:16.414532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825030 ] 00:17:00.376 EAL: No free 2048 kB hugepages reported on node 1 00:17:00.376 [2024-05-15 10:57:16.488009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.376 [2024-05-15 10:57:16.597904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.284 Running I/O for 10 seconds... 00:17:02.284 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:02.284 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # return 0 00:17:02.284 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:02.284 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.284 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:02.284 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.284 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:02.284 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:17:02.284 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:02.285 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:17:02.285 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:17:02.285 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:17:02.285 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:17:02.285 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:02.285 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 
00:17:02.285 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.285 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:02.285 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:02.546 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.546 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:17:02.546 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:17:02.546 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:17:02.805 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:17:02.805 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:02.805 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:02.805 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:02.805 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.805 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:02.805 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.805 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:17:02.805 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:17:02.805 10:57:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:17:03.079 10:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:17:03.079 10:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:17:03.079 10:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:17:03.079 10:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:17:03.079 10:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.079 10:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:03.079 10:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.079 10:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:17:03.079 10:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:17:03.079 10:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:17:03.079 10:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:17:03.079 10:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:17:03.079 10:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2824824 00:17:03.079 10:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@946 -- # '[' -z 2824824 ']' 00:17:03.079 10:57:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # kill -0 2824824 00:17:03.079 10:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # uname 00:17:03.079 10:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:03.079 10:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2824824 00:17:03.079 10:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:03.079 10:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:03.079 10:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2824824' 00:17:03.079 killing process with pid 2824824 00:17:03.079 10:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@965 -- # kill 2824824 00:17:03.079 10:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # wait 2824824 00:17:03.079 [2024-05-15 10:57:19.152649] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:03.079 [2024-05-15 10:57:19.155166] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155202] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155217] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155244] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155259] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155273] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155289] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155303] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155317] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155329] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155341] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155385] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155404] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155424] 
tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155438] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155450] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155464] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155477] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155489] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155501] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155513] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155525] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155540] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155553] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155566] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155578] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155591] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155602] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155614] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155627] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155639] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155651] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155663] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155681] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155694] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the 
state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155706] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155718] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155731] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155748] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155761] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155773] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155786] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155800] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155813] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155826] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155838] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.079 [2024-05-15 10:57:19.155850] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.080 [2024-05-15 10:57:19.155863] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.080 [2024-05-15 10:57:19.155883] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.080 [2024-05-15 10:57:19.155904] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.080 [2024-05-15 10:57:19.155952] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.080 [2024-05-15 10:57:19.155975] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.080 [2024-05-15 10:57:19.155996] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.080 [2024-05-15 10:57:19.156016] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.080 [2024-05-15 10:57:19.156039] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.080 [2024-05-15 10:57:19.156064] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.080 [2024-05-15 10:57:19.156086] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.080 [2024-05-15 10:57:19.156107] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.080 [2024-05-15 10:57:19.156127] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.080 [2024-05-15 10:57:19.156148] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.080 [2024-05-15 10:57:19.156168] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.080 [2024-05-15 10:57:19.156194] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.080 [2024-05-15 10:57:19.156221] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1e930 is same with the state(5) to be set 00:17:03.080 [2024-05-15 10:57:19.158768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.080 [2024-05-15 10:57:19.158818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.080 [2024-05-15 10:57:19.158859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.080 [2024-05-15 10:57:19.158884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.080 [2024-05-15 10:57:19.158909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.080 [2024-05-15 10:57:19.158944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.080 [2024-05-15 10:57:19.158971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:03.080 [2024-05-15 10:57:19.158994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.080 [2024-05-15 10:57:19.159018] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfea7c0 is same with the state(5) to be set 00:17:03.080 [2024-05-15 10:57:19.163095] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b212d0 is same with the state(5) to be set 00:17:03.080 [2024-05-15 10:57:19.163131] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b212d0 is same with the state(5) to be set 00:17:03.080 [2024-05-15 10:57:19.163146] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b212d0 is same with the state(5) to be set 00:17:03.080 [2024-05-15 10:57:19.163159] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b212d0 is same with the state(5) to be set 00:17:03.080 [2024-05-15 10:57:19.163171] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b212d0 is same with the state(5) to be set 00:17:03.080 [2024-05-15 10:57:19.163186] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
00:17:03.081 [2024-05-15 10:57:19.165127] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1edd0 is same with the state(5) to be set
00:17:03.081 (last message repeated for tqpair=0x1b1edd0 until 10:57:19.165193)
00:17:03.081 [2024-05-15 10:57:19.167572] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f710 is same with the state(5) to be set
00:17:03.081 (last message repeated for tqpair=0x1b1f710 until 10:57:19.168397)
00:17:03.081 [2024-05-15 10:57:19.169163] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1fbb0 is same with the state(5) to be set
00:17:03.081 (last message repeated for tqpair=0x1b1fbb0 until 10:57:19.169972)
00:17:03.082 [2024-05-15 10:57:19.170971] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b20050 is same with the state(5) to be set
00:17:03.082 (last message repeated for tqpair=0x1b20050 until 10:57:19.171803)
00:17:03.083 [2024-05-15 10:57:19.173089] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b204f0 is same with the state(5) to be set
00:17:03.083 (last message repeated for tqpair=0x1b204f0 until 10:57:19.173885)
00:17:03.084 [2024-05-15 10:57:19.175143] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b20e30 is same with the state(5) to be set
00:17:03.084 (last message repeated for tqpair=0x1b20e30 until 10:57:19.175943)
00:17:03.084 [2024-05-15 10:57:19.177385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:03.084 [2024-05-15 10:57:19.177420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:03.084 [2024-05-15 10:57:19.177449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:03.084 [2024-05-15 10:57:19.177465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:03.084 [2024-05-15 10:57:19.177482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:03.084 [2024-05-15 10:57:19.177496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:03.085 [2024-05-15 10:57:19.177512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:03.085 [2024-05-15 10:57:19.177527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:03.085 [2024-05-15 10:57:19.177543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:03.085 [2024-05-15 10:57:19.177558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:03.085 [2024-05-15 10:57:19.177573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1
lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.177588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.177603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.177618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.177633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.177653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.177669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.177684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.177699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.177714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.177729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.177743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.177759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.177773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.177789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.177803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.177819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.177833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.177849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.177863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.177879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.177894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.177920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.177941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.177959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.177973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.177989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.178003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.178019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.178033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.178052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.178067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.178082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.178097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.178112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.178126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.178142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.178157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.178172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.178186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.178202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.178216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.178234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.178248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.178263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.178277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.178292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.178306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.178322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.178336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.178351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.178365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.178381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.178395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.178410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.178428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.178444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.178459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.178475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.178490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.178505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:03.085 [2024-05-15 10:57:19.178519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.178535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.178549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.178565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.178579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.178595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.178609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.178625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.178639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.178654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.178669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.178685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.178699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.178714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.085 [2024-05-15 10:57:19.178729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.085 [2024-05-15 10:57:19.178744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.086 [2024-05-15 10:57:19.178759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.086 [2024-05-15 10:57:19.178774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.086 [2024-05-15 10:57:19.178789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.086 [2024-05-15 10:57:19.178808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.086 
[2024-05-15 10:57:19.178824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.086 [2024-05-15 10:57:19.178839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.086 [2024-05-15 10:57:19.178854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.086 [2024-05-15 10:57:19.178870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.086 [2024-05-15 10:57:19.178884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.086 [2024-05-15 10:57:19.178900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.086 [2024-05-15 10:57:19.178924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.086 [2024-05-15 10:57:19.178948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.086 [2024-05-15 10:57:19.178963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.086 [2024-05-15 10:57:19.178979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.086 [2024-05-15 10:57:19.178993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.086 [2024-05-15 10:57:19.179009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.086 [2024-05-15 10:57:19.179023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.086 [2024-05-15 10:57:19.179039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.086 [2024-05-15 10:57:19.179052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.086 [2024-05-15 10:57:19.179068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.086 [2024-05-15 10:57:19.179082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.086 [2024-05-15 10:57:19.179098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.086 [2024-05-15 10:57:19.179112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.086 [2024-05-15 10:57:19.179127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.086 [2024-05-15 
10:57:19.179141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.086 [2024-05-15 10:57:19.179157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.086 [2024-05-15 10:57:19.179171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.086 [2024-05-15 10:57:19.179187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.086 [2024-05-15 10:57:19.179205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.086 [2024-05-15 10:57:19.179228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.086 [2024-05-15 10:57:19.179242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.086 [2024-05-15 10:57:19.179258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.086 [2024-05-15 10:57:19.179272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.086 [2024-05-15 10:57:19.179288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.086 [2024-05-15 10:57:19.179302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.086 [2024-05-15 10:57:19.179317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.086 [2024-05-15 10:57:19.179331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.086 [2024-05-15 10:57:19.179347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.086 [2024-05-15 10:57:19.179360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.086 [2024-05-15 10:57:19.179376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.086 [2024-05-15 10:57:19.179390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.086 [2024-05-15 10:57:19.179433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:17:03.086 [2024-05-15 10:57:19.179513] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfd9e80 was disconnected and freed. reset controller. 
00:17:03.086 [2024-05-15 10:57:19.180453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:17:03.086 [2024-05-15 10:57:19.180488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST/ABORTED - SQ DELETION pair repeats for cid:1-3; this four-command admin abort block, each followed by nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=... is same with the state(5) to be set, repeats for tqpair=0x11a5230, 0x1039d30, 0x11b4d00, 0x100e6b0, 0x118f960 and 0xb45730 ...]
00:17:03.087 [2024-05-15 10:57:19.181533] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfea7c0 (9): Bad file descriptor
[... the same admin abort block repeats for tqpair=0x1015d10, 0x118dfb0 and 0x1034f20 ...]
00:17:03.087 [2024-05-15 10:57:19.185598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:03.087 [2024-05-15 10:57:19.185650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching nvme_qpair.c command/completion NOTICE pairs for WRITE cid:55-63 (lba 23424-24448) and READ cid:0-53 (lba 16384-23168), all ABORTED - SQ DELETION (00/08) ...]
00:17:03.089 [2024-05-15 10:57:19.187989] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ab6bd0 was disconnected and freed. reset controller.
00:17:03.089 [2024-05-15 10:57:19.188175] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:17:03.089 [2024-05-15 10:57:19.188240] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1015d10 (9): Bad file descriptor
00:17:03.089 [2024-05-15 10:57:19.188380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:03.089 [2024-05-15 10:57:19.188403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching pairs for WRITE cid:1-13 (lba 16512-18048), all ABORTED - SQ DELETION (00/08) ...]
00:17:03.089 [2024-05-15 10:57:19.188826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:03.089 [2024-05-15 10:57:19.188840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:17:03.089 [2024-05-15 10:57:19.188856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.089 [2024-05-15 10:57:19.188871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.089 [2024-05-15 10:57:19.188887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.089 [2024-05-15 10:57:19.188901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.089 [2024-05-15 10:57:19.188922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.089 [2024-05-15 10:57:19.188946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.089 [2024-05-15 10:57:19.188963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.089 [2024-05-15 10:57:19.188978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.089 [2024-05-15 10:57:19.188994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.089 [2024-05-15 10:57:19.189008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.089 [2024-05-15 10:57:19.189024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.089 [2024-05-15 10:57:19.189038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.089 [2024-05-15 10:57:19.189055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.089 [2024-05-15 10:57:19.189070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.089 [2024-05-15 10:57:19.189085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.089 [2024-05-15 10:57:19.189099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.089 [2024-05-15 10:57:19.189115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.089 [2024-05-15 10:57:19.189129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.089 [2024-05-15 10:57:19.189145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.089 [2024-05-15 10:57:19.189160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:17:03.089 [2024-05-15 10:57:19.189181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.089 [2024-05-15 10:57:19.189196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.089 [2024-05-15 10:57:19.189222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.089 [2024-05-15 10:57:19.189236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.089 [2024-05-15 10:57:19.189251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.089 [2024-05-15 10:57:19.189266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.089 [2024-05-15 10:57:19.189285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.189299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.189315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.189329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.189345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.189359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.189374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.189389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.189404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.189419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.189434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.189448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.189464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.189478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:17:03.090 [2024-05-15 10:57:19.189494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.189508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.189524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.189538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.189555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.189573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.189589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.189603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.189626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.189641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.189657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.189672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.189688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.189702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.189718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.189732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.189748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.189762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.189778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.189792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 
[2024-05-15 10:57:19.189808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.189822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.189838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.189852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.189868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.189882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.189898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.189912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.189928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.189950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.189970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.189985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.190001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.190016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.190032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.190046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.190061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.190075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.190091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.190105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 
10:57:19.190121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.190135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.190151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.190166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.190182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.190196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.190224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.190238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.190253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.190267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.190283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.190298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.190314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.190328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.190344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.190362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.190378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.090 [2024-05-15 10:57:19.190392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.090 [2024-05-15 10:57:19.190475] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x123fe70 was disconnected and freed. reset controller. 
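The "(00/08)" on every completion above decodes as status code type 0x0 (generic) with status code 0x08 (ABORTED - SQ DELETION): the queued READs and WRITEs were aborted because their submission queue was deleted while bdev_nvme tore the qpair down for the controller reset, not because the I/O itself failed. A minimal sketch of how a host application using the SPDK NVMe driver could recognize this status in its I/O completion callback; the callback name and the requeue policy are hypothetical, not part of this test:

#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical completion callback: treats the ABORTED - SQ DELETION
 * status seen throughout this log as retryable. */
static void
io_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	(void)cb_arg;
	if (spdk_nvme_cpl_is_error(cpl)) {
		if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
		    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
			/* (00/08): the command never executed; it was dropped
			 * when the qpair's SQ was deleted, so it is safe to
			 * resubmit once the controller reset completes. */
			fprintf(stderr, "I/O aborted by SQ deletion; requeue after reset\n");
			return;
		}
		fprintf(stderr, "I/O failed: sct=0x%x sc=0x%x\n",
			cpl->status.sct, cpl->status.sc);
	}
}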
00:17:03.090 [2024-05-15 10:57:19.192223-19.192458] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair (9): Bad file descriptor, for tqpair=0x11a5230, 0x1039d30, 0x11b4d00, 0x100e6b0, 0x118f960, 0xb45730, 0x118dfb0, 0x1034f20 [8 near-identical ERROR lines condensed]
00:17:03.090 [2024-05-15 10:57:19.192532] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:17:03.091 [2024-05-15 10:57:19.194368] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:17:03.091 [2024-05-15 10:57:19.194412] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:17:03.091 [2024-05-15 10:57:19.194696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:03.091 [2024-05-15 10:57:19.194729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1015d10 with addr=10.0.0.2, port=4420
00:17:03.091 [2024-05-15 10:57:19.194747] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1015d10 is same with the state(5) to be set
00:17:03.091 [2024-05-15 10:57:19.194823-19.196808] nvme_qpair.c: *NOTICE*: 64 repetitive command/completion pairs condensed: READ sqid:1 cid:0-63, lba 16384-24448, len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
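The "connect() failed, errno = 111" lines are ECONNREFUSED on Linux: during the reset window nothing is accepting connections on 10.0.0.2:4420, so each reconnect attempt from nvme_tcp_qpair_connect_sock() is refused until the target side is listening again. A self-contained check, illustrative only:

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	/* Decode the errno value printed by posix_sock_create() above. */
	printf("errno 111 = %s (ECONNREFUSED is %d on Linux)\n",
	       strerror(111), ECONNREFUSED);
	return 0;
}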
323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(5) to be set
00:17:03.092 [2024-05-15 10:57:19.198121-19.198810] nvme_tcp.c:1218:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 [6 identical ERROR lines condensed]
00:17:03.092 [2024-05-15 10:57:19.198841] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:17:03.092 [2024-05-15 10:57:19.199071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:03.092 [2024-05-15 10:57:19.199103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b4d00 with addr=10.0.0.2, port=4420
00:17:03.092 [2024-05-15 10:57:19.199120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b4d00 is same with the state(5) to be set
00:17:03.092 [2024-05-15 10:57:19.199303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:03.092 [2024-05-15 10:57:19.199330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a5230 with addr=10.0.0.2, port=4420
00:17:03.092 [2024-05-15 10:57:19.199346] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5230 is same with the state(5) to be set
00:17:03.092 [2024-05-15 10:57:19.199368] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1015d10 (9): Bad file descriptor
00:17:03.092 [2024-05-15 10:57:19.200154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:03.092 [2024-05-15 10:57:19.200184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfea7c0 with addr=10.0.0.2, port=4420
00:17:03.092 [2024-05-15 10:57:19.200201] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfea7c0 is same with the state(5) to be set
00:17:03.092 [2024-05-15 10:57:19.200220] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b4d00 (9): Bad file descriptor
00:17:03.093 [2024-05-15 10:57:19.200240] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a5230 (9): Bad file descriptor
00:17:03.093 [2024-05-15 10:57:19.200257] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:17:03.093 [2024-05-15 10:57:19.200270] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:17:03.093 [2024-05-15 10:57:19.200287] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:17:03.093 [2024-05-15 10:57:19.200620] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
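"Unexpected PDU type 0x00" comes from the host's PDU common-header handler (nvme_tcp_pdu_ch_handle). In the NVMe/TCP transport spec, 0x00 is ICReq, which is only valid as the very first host-to-controller PDU, so seeing it (or zero-filled bytes) on an established host-side connection is a protocol violation and the qpair is torn down. The spec's PDU type values, reproduced as a small C enum for reference; the enum names here are illustrative, not SPDK's own identifiers:

#include <stdio.h>

/* NVMe/TCP PDU types per the NVMe/TCP transport specification. */
enum nvme_tcp_pdu_type {
	PDU_IC_REQ       = 0x00, /* host -> controller, connection setup only */
	PDU_IC_RESP      = 0x01,
	PDU_H2C_TERM_REQ = 0x02,
	PDU_C2H_TERM_REQ = 0x03,
	PDU_CAPSULE_CMD  = 0x04,
	PDU_CAPSULE_RESP = 0x05,
	PDU_H2C_DATA     = 0x06,
	PDU_C2H_DATA     = 0x07,
	PDU_R2T          = 0x09,
};

int main(void)
{
	/* Type 0x00 after initialization, as logged above, is invalid on the
	 * host side, so the common-header handler rejects the PDU. */
	printf("PDU type 0x00 = ICReq; unexpected once the connection is up\n");
	return 0;
}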
00:17:03.093 [2024-05-15 10:57:19.200647] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfea7c0 (9): Bad file descriptor
00:17:03.093 [2024-05-15 10:57:19.200666] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:17:03.093 [2024-05-15 10:57:19.200679] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:17:03.093 [2024-05-15 10:57:19.200692] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:17:03.093 [2024-05-15 10:57:19.200711] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:17:03.093 [2024-05-15 10:57:19.200725] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:17:03.093 [2024-05-15 10:57:19.200738] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:17:03.093 [2024-05-15 10:57:19.200808] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:03.093 [2024-05-15 10:57:19.200828] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:17:03.093 [2024-05-15 10:57:19.200841] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:17:03.093 [2024-05-15 10:57:19.200853] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:17:03.093 [2024-05-15 10:57:19.200866] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:17:03.093 [2024-05-15 10:57:19.200928] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
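The "Ctrlr is in error state" / "controller reinitialization failed" / "Resetting controller failed." triplets show spdk_nvme_ctrlr_reconnect_poll_async() giving up on cnode1, cnode2, cnode3, and cnode8 for this attempt; bdev_nvme retries resets on its own schedule. For a bare NVMe-driver application without the bdev layer, a bounded retry loop around the synchronous reset API would look roughly like this; the helper name and retry policy are hypothetical:

#include "spdk/nvme.h"

/* Hypothetical helper: retry a full controller reset a bounded number of
 * times. spdk_nvme_ctrlr_reset() disconnects and then reinitializes the
 * controller, returning non-zero on the kind of failure logged above. */
static int
reset_ctrlr_with_retry(struct spdk_nvme_ctrlr *ctrlr, int max_attempts)
{
	for (int i = 0; i < max_attempts; i++) {
		if (spdk_nvme_ctrlr_reset(ctrlr) == 0) {
			return 0; /* controller reinitialized */
		}
	}
	return -1; /* still in failed state, as in the log */
}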
00:17:03.093 [2024-05-15 10:57:19.202348-19.203292] nvme_qpair.c: *NOTICE*: 30 repetitive command/completion pairs condensed: READ sqid:1 cid:0-29, lba 24576-28288, len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:03.093 [2024-05-15 10:57:19.203307] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.093 [2024-05-15 10:57:19.203321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.093 [2024-05-15 10:57:19.203336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.093 [2024-05-15 10:57:19.203350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.093 [2024-05-15 10:57:19.203366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.203380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.203397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.203411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.203427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.203441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.203457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.203471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.203487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.203501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.203516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.203530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.203546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.203568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.203584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.203599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.203615] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.203629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.203645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.203658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.203675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.203689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.203704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.203719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.203735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.203749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.203764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.203778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.203794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.203808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.203824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.203838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.203853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.203867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.203883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.203898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.203914] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.203928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.203957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.203972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.203988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.204003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.204019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.204033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.204049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.204063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.204079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.204093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.204109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.204123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.204138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.204152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.204168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.204182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.204198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.204212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.204228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.204242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.204258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.204272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.204288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.204302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.204318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.204336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.204351] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfe6220 is same with the state(5) to be set 00:17:03.094 [2024-05-15 10:57:19.205605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.205629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.205649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.205665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.205681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.205696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.205712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.205726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.205741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.205756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.205771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.205785] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.205801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.205816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.094 [2024-05-15 10:57:19.205832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.094 [2024-05-15 10:57:19.205847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.205863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.205877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.205893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.205908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.205923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.205945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.205963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.205982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.205999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.206973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.206989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.207003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.207019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.207033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.207049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.207064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
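The NOTICE pairs above and below all follow one pattern: per qpair, a batch of 64 queued READs (cid 0 through 63) with the LBA advancing by 128 to match len:128, every one completed as ABORTED - SQ DELETION when the queue pair is torn down. A minimal sketch, under the same hypothetical names as the earlier snippet, of how such a batch might be queued; ns, qpair and buf are assumed to be set up elsewhere, and reusing one buf for every command is only acceptable in a throwaway sketch:

/* Hypothetical sketch: queue BATCH_DEPTH sequential READs, matching the
 * cid/lba progression visible in the log. ctx must have BATCH_DEPTH entries. */
#define BATCH_DEPTH 64    /* matches cid 0..63 in the log */
#define LBA_COUNT   128   /* matches len:128 */

static int
submit_read_batch(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                  void *buf, uint64_t base_lba, io_ctx_t *ctx)
{
    for (uint32_t i = 0; i < BATCH_DEPTH; i++) {
        /* cid i reads LBA base_lba + i * 128, the stride seen above. */
        ctx[i].lba = base_lba + (uint64_t)i * LBA_COUNT;
        ctx[i].retry = false;
        int rc = spdk_nvme_ns_cmd_read(ns, qpair, buf, ctx[i].lba,
                                       LBA_COUNT, on_read_done, &ctx[i], 0);
        if (rc != 0) {
            return rc;   /* e.g. -ENOMEM when the submission queue is full */
        }
    }
    return 0;
}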
00:17:03.095 [2024-05-15 10:57:19.207080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.207095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.207110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.095 [2024-05-15 10:57:19.207125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.095 [2024-05-15 10:57:19.207141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.207155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.207175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.207190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.207218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.207232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.207248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.207262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.207286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.207300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.207315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.207330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.207345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.207360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.207375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.207398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 
10:57:19.207414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.207428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.207444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.207458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.207474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.207489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.207504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.207518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.207534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.207548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.207564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.207583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.207599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.207613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.207629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.207644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.207658] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1137300 is same with the state(5) to be set 00:17:03.096 [2024-05-15 10:57:19.208912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.208953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.208974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.208990] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.209006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.209020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.209036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.209051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.209066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.209081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.209097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.209111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.209127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.209141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.209157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.209171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.209187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.209201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.209228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.209246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.209262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.209276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.209292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.209306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.209322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.209336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.209353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.209368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.209383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.209398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.209414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.096 [2024-05-15 10:57:19.209434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.096 [2024-05-15 10:57:19.209450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.097 [2024-05-15 10:57:19.209465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.097 [2024-05-15 10:57:19.209481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.097 [2024-05-15 10:57:19.209495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.097 [2024-05-15 10:57:19.209510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.097 [2024-05-15 10:57:19.209535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.097 [2024-05-15 10:57:19.209551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.097 [2024-05-15 10:57:19.209565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.097 [2024-05-15 10:57:19.209581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.097 [2024-05-15 10:57:19.209596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.097 [2024-05-15 10:57:19.209611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.097 [2024-05-15 10:57:19.209626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.097 [2024-05-15 10:57:19.209646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.097 [2024-05-15 10:57:19.209661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.097 [2024-05-15 10:57:19.209676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.097 [2024-05-15 10:57:19.209691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.097 [2024-05-15 10:57:19.209707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.097 [2024-05-15 10:57:19.209722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.097 [2024-05-15 10:57:19.209738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.097 [2024-05-15 10:57:19.209753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.097 [2024-05-15 10:57:19.209769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.097 [2024-05-15 10:57:19.209783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.097 [2024-05-15 10:57:19.209799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.097 [2024-05-15 10:57:19.209814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.097 [2024-05-15 10:57:19.209830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.097 [2024-05-15 10:57:19.209844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.097 [2024-05-15 10:57:19.209860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.097 [2024-05-15 10:57:19.209875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.097 [2024-05-15 10:57:19.209892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.097 [2024-05-15 10:57:19.209906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.097 [2024-05-15 10:57:19.209923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.097 [2024-05-15 10:57:19.209944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.097 [2024-05-15 10:57:19.209961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:03.097 [2024-05-15 10:57:19.209976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... the same READ/completion NOTICE pair repeats for cid:33-63 (lba 12416-16256, len:128); every outstanding READ on sqid:1 completes with ABORTED - SQ DELETION (00/08) ...] 
00:17:03.098 [2024-05-15 10:57:19.210946] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1138800 is same with the state(5) to be set 
00:17:03.098 [2024-05-15 10:57:19.212207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
[... READ/completion NOTICE pairs repeat for cid:0-63 (lba 16384-24448, len:128); all complete with ABORTED - SQ DELETION (00/08) ...] 
00:17:03.099 [2024-05-15 10:57:19.214274] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1139d00 is same with the state(5) to be set 
00:17:03.099 [2024-05-15 10:57:19.215532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
[... READ/completion NOTICE pairs repeat for cid:0-63 (lba 16384-24448, len:128); all complete with ABORTED - SQ DELETION (00/08) ...] 
00:17:03.101 [2024-05-15 10:57:19.217576] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122bca0 is same with the state(5) to be set
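Each run above is SPDK draining a qpair whose submission queue was deleted during the test: every outstanding READ on sqid:1 is printed by nvme_io_qpair_print_command() and then completed with ABORTED - SQ DELETION, i.e. NVMe generic status (status code type 00h, status code 08h), which is the "(00/08)" printed by spdk_nvme_print_completion(). Runs like these are easier to compare if the repetition is collapsed into one count per qpair; a minimal sketch, assuming this console output has been saved to build.log (an illustrative filename, not something this job produces): 

#!/usr/bin/env python3
# Collapse repeated SPDK abort notices into one count per drained qpair.
# Sketch only: build.log is an assumed capture of this console output.
import re
import sys

ABORT = re.compile(r"ABORTED - SQ DELETION \(00/08\)")
QPAIR = re.compile(r"recv state of tqpair=(0x[0-9a-f]+)")

aborted = 0
with open(sys.argv[1] if len(sys.argv) > 1 else "build.log") as log:
    for line in log:
        aborted += len(ABORT.findall(line))  # one completion NOTICE per aborted command
        for qpair in QPAIR.findall(line):    # in runs like the ones above, the
            # recv-state ERROR line closes each drain, so report and reset there
            print(f"{aborted} commands aborted on tqpair {qpair}")
            aborted = 0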
00:17:03.101 [2024-05-15 10:57:19.219941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:03.101 [2024-05-15 10:57:19.219967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... READ/completion NOTICE pairs repeat for cid:15-62 (lba 10112-16128, len:128); all complete with ABORTED - SQ DELETION (00/08) ...] 00:17:03.103 [2024-05-15 10:57:19.221526] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.103 [2024-05-15 10:57:19.221541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.103 [2024-05-15 10:57:19.221558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.103 [2024-05-15 10:57:19.221572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.103 [2024-05-15 10:57:19.221589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.103 [2024-05-15 10:57:19.221605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.103 [2024-05-15 10:57:19.221621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.103 [2024-05-15 10:57:19.221637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.103 [2024-05-15 10:57:19.221653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.103 [2024-05-15 10:57:19.221668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.103 [2024-05-15 10:57:19.221684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.103 [2024-05-15 10:57:19.221701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.103 [2024-05-15 10:57:19.221718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.103 [2024-05-15 10:57:19.221733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.103 [2024-05-15 10:57:19.221750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.103 [2024-05-15 10:57:19.221764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.103 [2024-05-15 10:57:19.221779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.103 [2024-05-15 10:57:19.221793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.103 [2024-05-15 10:57:19.221809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.103 [2024-05-15 10:57:19.221824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.103 [2024-05-15 10:57:19.221840] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.103 [2024-05-15 10:57:19.221854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.103 [2024-05-15 10:57:19.221869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.103 [2024-05-15 10:57:19.221883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.103 [2024-05-15 10:57:19.221899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.103 [2024-05-15 10:57:19.221914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.103 [2024-05-15 10:57:19.221946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.103 [2024-05-15 10:57:19.221961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.103 [2024-05-15 10:57:19.221976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.103 [2024-05-15 10:57:19.221990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:03.103 [2024-05-15 10:57:19.222005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x122d190 is same with the state(5) to be set 00:17:03.103 [2024-05-15 10:57:19.223790] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:17:03.103 [2024-05-15 10:57:19.223822] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:17:03.103 [2024-05-15 10:57:19.223840] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:17:03.103 [2024-05-15 10:57:19.223968] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:03.103 [2024-05-15 10:57:19.224001] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:03.103 [2024-05-15 10:57:19.224022] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:03.103 [2024-05-15 10:57:19.224049] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:17:03.103 [2024-05-15 10:57:19.224143] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:17:03.103 [2024-05-15 10:57:19.224167] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:17:03.103 task offset: 29312 on job bdev=Nvme3n1 fails
00:17:03.103
00:17:03.103 Latency(us)
00:17:03.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:03.103 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:03.103 Job: Nvme1n1 ended in about 0.89 seconds with error
00:17:03.103 Verification LBA range: start 0x0 length 0x400
00:17:03.103 Nvme1n1 : 0.89 144.14 9.01 72.07 0.00 292702.94 22622.06 267192.70
00:17:03.103 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:03.103 Job: Nvme2n1 ended in about 0.88 seconds with error
00:17:03.103 Verification LBA range: start 0x0 length 0x400
00:17:03.103 Nvme2n1 : 0.88 144.84 9.05 72.42 0.00 285075.78 8738.13 335544.32
00:17:03.103 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:03.103 Job: Nvme3n1 ended in about 0.88 seconds with error
00:17:03.103 Verification LBA range: start 0x0 length 0x400
00:17:03.103 Nvme3n1 : 0.88 219.36 13.71 73.12 0.00 207063.42 6990.51 233016.89
00:17:03.103 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:03.103 Job: Nvme4n1 ended in about 0.90 seconds with error
00:17:03.103 Verification LBA range: start 0x0 length 0x400
00:17:03.103 Nvme4n1 : 0.90 214.39 13.40 71.46 0.00 207581.87 25631.86 242337.56
00:17:03.103 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:03.103 Job: Nvme5n1 ended in about 0.90 seconds with error
00:17:03.103 Verification LBA range: start 0x0 length 0x400
00:17:03.103 Nvme5n1 : 0.90 142.40 8.90 71.20 0.00 271890.96 24855.13 270299.59
00:17:03.103 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:03.103 Job: Nvme6n1 ended in about 0.90 seconds with error
00:17:03.103 Verification LBA range: start 0x0 length 0x400
00:17:03.103 Nvme6n1 : 0.90 70.94 4.43 70.94 0.00 400183.75 44661.57 352632.23
00:17:03.103 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:03.103 Job: Nvme7n1 ended in about 0.91 seconds with error
00:17:03.103 Verification LBA range: start 0x0 length 0x400
00:17:03.103 Nvme7n1 : 0.91 141.37 8.84 70.68 0.00 261744.77 24078.41 260978.92
00:17:03.103 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:03.103 Job: Nvme8n1 ended in about 0.88 seconds with error
00:17:03.103 Verification LBA range: start 0x0 length 0x400
00:17:03.103 Nvme8n1 : 0.88 145.11 9.07 72.55 0.00 247972.47 10922.67 301368.51
00:17:03.103 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:03.103 Job: Nvme9n1 ended in about 0.91 seconds with error
00:17:03.103 Verification LBA range: start 0x0 length 0x400
00:17:03.103 Nvme9n1 : 0.91 140.85 8.80 70.43 0.00 250925.26 21262.79 267192.70
00:17:03.103 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:17:03.103 Job: Nvme10n1 ended in about 0.91 seconds with error
00:17:03.104 Verification LBA range: start 0x0 length 0x400
00:17:03.104 Nvme10n1 : 0.91 85.42 5.34 70.09 0.00 333287.99 33204.91 369720.13
00:17:03.104 ===================================================================================================================
00:17:03.104 Total : 1448.81 90.55 714.96 0.00 265708.14 6990.51 369720.13
00:17:03.104 [2024-05-15 10:57:19.251866] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:17:03.104 [2024-05-15 10:57:19.251965] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:17:03.104 [2024-05-15 10:57:19.252014] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:17:03.104 [2024-05-15 10:57:19.252446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:03.104 [2024-05-15 10:57:19.252489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100e6b0 with addr=10.0.0.2, port=4420
00:17:03.104 [2024-05-15 10:57:19.252509] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100e6b0 is same with the state(5) to be set
00:17:03.104 [2024-05-15 10:57:19.252700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:03.104 [2024-05-15 10:57:19.252726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb45730 with addr=10.0.0.2, port=4420
00:17:03.104 [2024-05-15 10:57:19.252743] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb45730 is same with the state(5) to be set
00:17:03.104 [2024-05-15 10:57:19.252924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:03.104 [2024-05-15 10:57:19.252960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118f960 with addr=10.0.0.2, port=4420
00:17:03.104 [2024-05-15 10:57:19.252977] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118f960 is same with the state(5) to be set
00:17:03.104 [2024-05-15 10:57:19.254652] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:17:03.104 [2024-05-15 10:57:19.254682] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:17:03.104 [2024-05-15 10:57:19.254959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:03.104 [2024-05-15 10:57:19.254988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x118dfb0 with addr=10.0.0.2, port=4420
00:17:03.104 [2024-05-15 10:57:19.255004] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x118dfb0 is same with the state(5) to be set
00:17:03.104 [2024-05-15 10:57:19.255188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:03.104 [2024-05-15 10:57:19.255214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1034f20 with addr=10.0.0.2, port=4420
00:17:03.104 [2024-05-15 10:57:19.255230] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1034f20 is same with the state(5) to be set
00:17:03.104 [2024-05-15 10:57:19.255401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:17:03.104 [2024-05-15 10:57:19.255434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1039d30 with addr=10.0.0.2, port=4420
00:17:03.104 [2024-05-15 10:57:19.255450] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1039d30 is same with the state(5) to be set
00:17:03.104 [2024-05-15 10:57:19.255653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:03.104
[2024-05-15 10:57:19.255679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1015d10 with addr=10.0.0.2, port=4420 00:17:03.104 [2024-05-15 10:57:19.255694] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1015d10 is same with the state(5) to be set 00:17:03.104 [2024-05-15 10:57:19.255720] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100e6b0 (9): Bad file descriptor 00:17:03.104 [2024-05-15 10:57:19.255744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb45730 (9): Bad file descriptor 00:17:03.104 [2024-05-15 10:57:19.255763] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118f960 (9): Bad file descriptor 00:17:03.104 [2024-05-15 10:57:19.255823] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:03.104 [2024-05-15 10:57:19.255850] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:03.104 [2024-05-15 10:57:19.255883] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:03.104 [2024-05-15 10:57:19.255909] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:17:03.104 [2024-05-15 10:57:19.256018] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:03.104 [2024-05-15 10:57:19.256234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:03.104 [2024-05-15 10:57:19.256264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11a5230 with addr=10.0.0.2, port=4420 00:17:03.104 [2024-05-15 10:57:19.256280] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a5230 is same with the state(5) to be set 00:17:03.104 [2024-05-15 10:57:19.256475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:03.104 [2024-05-15 10:57:19.256501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11b4d00 with addr=10.0.0.2, port=4420 00:17:03.104 [2024-05-15 10:57:19.256517] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11b4d00 is same with the state(5) to be set 00:17:03.104 [2024-05-15 10:57:19.256535] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x118dfb0 (9): Bad file descriptor 00:17:03.104 [2024-05-15 10:57:19.256554] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1034f20 (9): Bad file descriptor 00:17:03.104 [2024-05-15 10:57:19.256572] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1039d30 (9): Bad file descriptor 00:17:03.104 [2024-05-15 10:57:19.256590] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1015d10 (9): Bad file descriptor 00:17:03.104 [2024-05-15 10:57:19.256612] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:17:03.104 [2024-05-15 10:57:19.256625] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:17:03.104 [2024-05-15 10:57:19.256642] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:17:03.104 [2024-05-15 10:57:19.256662] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:17:03.104 [2024-05-15 10:57:19.256676] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:17:03.104 [2024-05-15 10:57:19.256689] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:17:03.104 [2024-05-15 10:57:19.256705] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:17:03.104 [2024-05-15 10:57:19.256718] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:17:03.104 [2024-05-15 10:57:19.256731] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:17:03.104 [2024-05-15 10:57:19.256822] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:03.104 [2024-05-15 10:57:19.256842] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:03.104 [2024-05-15 10:57:19.256854] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:03.104 [2024-05-15 10:57:19.257022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:17:03.104 [2024-05-15 10:57:19.257049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfea7c0 with addr=10.0.0.2, port=4420 00:17:03.104 [2024-05-15 10:57:19.257064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfea7c0 is same with the state(5) to be set 00:17:03.104 [2024-05-15 10:57:19.257083] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a5230 (9): Bad file descriptor 00:17:03.104 [2024-05-15 10:57:19.257101] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11b4d00 (9): Bad file descriptor 00:17:03.104 [2024-05-15 10:57:19.257117] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:17:03.104 [2024-05-15 10:57:19.257135] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:17:03.104 [2024-05-15 10:57:19.257149] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:17:03.104 [2024-05-15 10:57:19.257167] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:17:03.104 [2024-05-15 10:57:19.257182] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:17:03.104 [2024-05-15 10:57:19.257194] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:17:03.104 [2024-05-15 10:57:19.257210] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:17:03.104 [2024-05-15 10:57:19.257224] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:17:03.104 [2024-05-15 10:57:19.257247] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:17:03.104 [2024-05-15 10:57:19.257263] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:17:03.104 [2024-05-15 10:57:19.257276] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:17:03.104 [2024-05-15 10:57:19.257289] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:17:03.104 [2024-05-15 10:57:19.257337] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:03.104 [2024-05-15 10:57:19.257354] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:03.104 [2024-05-15 10:57:19.257366] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:03.104 [2024-05-15 10:57:19.257378] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:03.104 [2024-05-15 10:57:19.257394] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfea7c0 (9): Bad file descriptor 00:17:03.104 [2024-05-15 10:57:19.257410] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:17:03.104 [2024-05-15 10:57:19.257423] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:17:03.104 [2024-05-15 10:57:19.257436] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:17:03.104 [2024-05-15 10:57:19.257452] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:03.104 [2024-05-15 10:57:19.257466] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:17:03.104 [2024-05-15 10:57:19.257479] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:17:03.104 [2024-05-15 10:57:19.257516] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:03.104 [2024-05-15 10:57:19.257532] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:17:03.104 [2024-05-15 10:57:19.257544] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:03.104 [2024-05-15 10:57:19.257557] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:17:03.104 [2024-05-15 10:57:19.257570] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:03.104 [2024-05-15 10:57:19.257604] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:17:03.673 10:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:17:03.673 10:57:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:17:04.611 10:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2825030 00:17:04.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2825030) - No such process 00:17:04.611 10:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:17:04.611 10:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:17:04.611 10:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:17:04.611 10:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:17:04.611 10:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:04.612 10:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:17:04.612 10:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:04.612 10:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:17:04.612 10:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:04.612 10:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:17:04.612 10:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:04.612 10:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:04.612 rmmod nvme_tcp 00:17:04.612 rmmod nvme_fabrics 00:17:04.612 rmmod nvme_keyring 00:17:04.612 10:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:04.612 10:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:17:04.612 10:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:17:04.612 10:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:04.612 10:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:04.612 10:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:04.612 10:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:04.612 10:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:04.612 10:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:04.612 10:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.612 10:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:04.612 10:57:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.147 10:57:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:07.147 00:17:07.147 real 0m8.201s 00:17:07.147 user 0m21.203s 00:17:07.147 sys 0m1.547s 00:17:07.147 
10:57:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:07.147 10:57:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:17:07.147 ************************************ 00:17:07.147 END TEST nvmf_shutdown_tc3 00:17:07.147 ************************************ 00:17:07.147 10:57:22 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:17:07.147 00:17:07.147 real 0m29.568s 00:17:07.147 user 1m22.971s 00:17:07.147 sys 0m7.024s 00:17:07.147 10:57:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:07.147 10:57:22 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:17:07.147 ************************************ 00:17:07.147 END TEST nvmf_shutdown 00:17:07.147 ************************************ 00:17:07.147 10:57:22 nvmf_tcp -- nvmf/nvmf.sh@84 -- # timing_exit target 00:17:07.147 10:57:22 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:07.147 10:57:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:07.147 10:57:22 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_enter host 00:17:07.147 10:57:22 nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:07.147 10:57:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:07.147 10:57:22 nvmf_tcp -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:17:07.147 10:57:22 nvmf_tcp -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:17:07.147 10:57:22 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:07.147 10:57:22 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:07.147 10:57:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:07.147 ************************************ 00:17:07.147 START TEST nvmf_multicontroller 00:17:07.147 ************************************ 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:17:07.147 * Looking for test storage... 
00:17:07.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:17:07.147 10:57:22 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:17:07.147 10:57:22 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:09.677 10:57:25 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:09.677 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:09.677 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:09.677 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:09.677 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:09.677 10:57:25 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:09.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:09.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:17:09.677 00:17:09.677 --- 10.0.0.2 ping statistics --- 00:17:09.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.677 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:09.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:09.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:17:09.677 00:17:09.677 --- 10.0.0.1 ping statistics --- 00:17:09.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.677 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:09.677 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:09.678 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:09.678 10:57:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:17:09.678 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:09.678 10:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:09.678 10:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:09.678 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2827846 00:17:09.678 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:09.678 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2827846 00:17:09.678 10:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 2827846 ']' 00:17:09.678 10:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.678 10:57:25 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@832 -- # local max_retries=100 00:17:09.678 10:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:09.678 10:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:09.678 10:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:09.678 [2024-05-15 10:57:25.663504] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:17:09.678 [2024-05-15 10:57:25.663595] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.678 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.678 [2024-05-15 10:57:25.739309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:09.678 [2024-05-15 10:57:25.844999] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.678 [2024-05-15 10:57:25.845069] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.678 [2024-05-15 10:57:25.845104] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.678 [2024-05-15 10:57:25.845116] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.678 [2024-05-15 10:57:25.845126] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:09.678 [2024-05-15 10:57:25.845209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.678 [2024-05-15 10:57:25.845272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:09.678 [2024-05-15 10:57:25.845275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.936 10:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:09.936 10:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:17:09.936 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:09.936 10:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:09.936 10:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:09.936 10:57:25 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.936 10:57:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:09.936 10:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.936 10:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:09.936 [2024-05-15 10:57:25.979852] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:09.936 10:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.936 10:57:25 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:09.936 10:57:25 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.936 10:57:25 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:09.936 Malloc0 00:17:09.936 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.936 10:57:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:09.936 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.936 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:09.936 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.936 10:57:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:09.936 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.936 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:09.936 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.936 10:57:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.936 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.936 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:09.936 [2024-05-15 10:57:26.038436] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:09.936 [2024-05-15 10:57:26.038711] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.936 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.936 10:57:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:09.936 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.936 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:09.936 [2024-05-15 10:57:26.046623] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:09.936 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.936 10:57:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:09.936 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.936 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:09.936 Malloc1 00:17:09.936 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.936 10:57:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:17:09.936 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.936 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:09.936 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.937 10:57:26 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:17:09.937 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.937 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:09.937 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.937 10:57:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:09.937 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.937 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:09.937 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.937 10:57:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:17:09.937 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.937 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:09.937 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.937 10:57:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2827961 00:17:09.937 10:57:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:17:09.937 10:57:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:09.937 10:57:26 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2827961 /var/tmp/bdevperf.sock 00:17:09.937 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@827 -- # '[' -z 2827961 ']' 00:17:09.937 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:09.937 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:09.937 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:09.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
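The setup traced above reduces to a short RPC sequence. A minimal sketch, assuming the SPDK tree's scripts/rpc.py is invoked as rpc.py against the default /var/tmp/spdk.sock (the harness's rpc_cmd is a wrapper around it); every NQN, address, and flag below is copied from the trace:

    # TCP transport, then two one-namespace subsystems (cnode1/cnode2),
    # each listening on both ports 4420 and 4421 of the namespaced 10.0.0.2.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    rpc.py bdev_malloc_create 64 512 -b Malloc1
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
    # bdevperf then starts idle (-z) on its own RPC socket; controllers are
    # attached to it over /var/tmp/bdevperf.sock before the 1-second write run.
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f

The NOT-wrapped bdev_nvme_attach_controller calls that follow are negative tests: re-attaching under the existing controller name NVMe0 with a conflicting host identity, a different subsystem, or an unsupported multipath mode is expected to fail with JSON-RPC error -114, and the harness asserts the non-zero exit status.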
00:17:09.937 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:09.937 10:57:26 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@860 -- # return 0 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.310 NVMe0n1 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.310 1 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.310 request: 00:17:11.310 { 00:17:11.310 "name": "NVMe0", 00:17:11.310 "trtype": "tcp", 00:17:11.310 "traddr": "10.0.0.2", 00:17:11.310 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:17:11.310 "hostaddr": "10.0.0.2", 00:17:11.310 "hostsvcid": "60000", 00:17:11.310 "adrfam": "ipv4", 00:17:11.310 "trsvcid": "4420", 00:17:11.310 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.310 "method": 
"bdev_nvme_attach_controller", 00:17:11.310 "req_id": 1 00:17:11.310 } 00:17:11.310 Got JSON-RPC error response 00:17:11.310 response: 00:17:11.310 { 00:17:11.310 "code": -114, 00:17:11.310 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:17:11.310 } 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.310 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.310 request: 00:17:11.310 { 00:17:11.310 "name": "NVMe0", 00:17:11.310 "trtype": "tcp", 00:17:11.310 "traddr": "10.0.0.2", 00:17:11.310 "hostaddr": "10.0.0.2", 00:17:11.310 "hostsvcid": "60000", 00:17:11.310 "adrfam": "ipv4", 00:17:11.310 "trsvcid": "4420", 00:17:11.310 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:11.310 "method": "bdev_nvme_attach_controller", 00:17:11.310 "req_id": 1 00:17:11.310 } 00:17:11.310 Got JSON-RPC error response 00:17:11.310 response: 00:17:11.311 { 00:17:11.311 "code": -114, 00:17:11.311 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:17:11.311 } 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd 
-s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.311 request: 00:17:11.311 { 00:17:11.311 "name": "NVMe0", 00:17:11.311 "trtype": "tcp", 00:17:11.311 "traddr": "10.0.0.2", 00:17:11.311 "hostaddr": "10.0.0.2", 00:17:11.311 "hostsvcid": "60000", 00:17:11.311 "adrfam": "ipv4", 00:17:11.311 "trsvcid": "4420", 00:17:11.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.311 "multipath": "disable", 00:17:11.311 "method": "bdev_nvme_attach_controller", 00:17:11.311 "req_id": 1 00:17:11.311 } 00:17:11.311 Got JSON-RPC error response 00:17:11.311 response: 00:17:11.311 { 00:17:11.311 "code": -114, 00:17:11.311 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:17:11.311 } 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.311 request: 00:17:11.311 { 00:17:11.311 "name": "NVMe0", 00:17:11.311 "trtype": "tcp", 00:17:11.311 "traddr": "10.0.0.2", 00:17:11.311 "hostaddr": "10.0.0.2", 00:17:11.311 "hostsvcid": "60000", 00:17:11.311 "adrfam": "ipv4", 00:17:11.311 "trsvcid": "4420", 00:17:11.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:11.311 "multipath": "failover", 00:17:11.311 "method": "bdev_nvme_attach_controller", 00:17:11.311 "req_id": 1 00:17:11.311 } 00:17:11.311 Got JSON-RPC error response 00:17:11.311 response: 00:17:11.311 { 00:17:11.311 "code": -114, 00:17:11.311 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:17:11.311 } 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.311 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.311 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.570 00:17:11.570 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.570 10:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:17:11.570 10:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:17:11.570 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.570 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:11.570 10:57:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.570 10:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:17:11.570 10:57:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:12.945 0 00:17:12.945 10:57:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:17:12.945 10:57:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.945 10:57:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:12.945 10:57:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.945 10:57:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2827961 00:17:12.945 10:57:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 2827961 ']' 00:17:12.945 10:57:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 2827961 00:17:12.945 10:57:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:17:12.945 10:57:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:12.945 10:57:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2827961 00:17:12.945 10:57:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:12.945 10:57:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:12.945 10:57:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2827961' 00:17:12.945 killing process with pid 2827961 00:17:12.945 10:57:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 2827961 00:17:12.945 10:57:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 2827961 00:17:12.945 10:57:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:12.945 10:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.945 10:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:12.945 10:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.945 10:57:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:17:12.945 10:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.945 10:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:12.946 10:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.946 10:57:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:17:12.946 10:57:29 
nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:17:12.946 10:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:17:12.946 10:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:17:12.946 10:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1607 -- # sort -u 00:17:12.946 10:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # cat 00:17:12.946 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:17:12.946 [2024-05-15 10:57:26.147894] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:17:12.946 [2024-05-15 10:57:26.148021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2827961 ] 00:17:12.946 EAL: No free 2048 kB hugepages reported on node 1 00:17:12.946 [2024-05-15 10:57:26.219719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.946 [2024-05-15 10:57:26.328393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.946 [2024-05-15 10:57:27.690239] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name 453b4e8e-56e3-457a-83c1-ed38518edf35 already exists 00:17:12.946 [2024-05-15 10:57:27.690278] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:453b4e8e-56e3-457a-83c1-ed38518edf35 alias for bdev NVMe1n1 00:17:12.946 [2024-05-15 10:57:27.690311] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:17:12.946 Running I/O for 1 seconds... 
00:17:12.946 00:17:12.946 Latency(us) 00:17:12.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.946 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:17:12.946 NVMe0n1 : 1.01 16980.87 66.33 0.00 0.00 7505.22 2160.26 9709.04 00:17:12.946 =================================================================================================================== 00:17:12.946 Total : 16980.87 66.33 0.00 0.00 7505.22 2160.26 9709.04 00:17:12.946 Received shutdown signal, test time was about 1.000000 seconds 00:17:12.946 00:17:12.946 Latency(us) 00:17:12.946 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.946 =================================================================================================================== 00:17:12.946 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:12.946 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:17:12.946 10:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1614 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:17:12.946 10:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # read -r file 00:17:12.946 10:57:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:17:12.946 10:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:12.946 10:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:17:12.946 10:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:12.946 10:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:17:12.946 10:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:12.946 10:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:12.946 rmmod nvme_tcp 00:17:12.946 rmmod nvme_fabrics 00:17:13.205 rmmod nvme_keyring 00:17:13.205 10:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:13.205 10:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:17:13.205 10:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:17:13.205 10:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2827846 ']' 00:17:13.205 10:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2827846 00:17:13.206 10:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@946 -- # '[' -z 2827846 ']' 00:17:13.206 10:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@950 -- # kill -0 2827846 00:17:13.206 10:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # uname 00:17:13.206 10:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:13.206 10:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2827846 00:17:13.206 10:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:17:13.206 10:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:17:13.206 10:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2827846' 00:17:13.206 killing process with pid 2827846 00:17:13.206 10:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # kill 2827846 00:17:13.206 [2024-05-15 
10:57:29.229337] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:13.206 10:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@970 -- # wait 2827846 00:17:13.466 10:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:13.466 10:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:13.466 10:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:13.466 10:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:13.466 10:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:13.466 10:57:29 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:13.466 10:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:13.466 10:57:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.370 10:57:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:15.370 00:17:15.370 real 0m8.663s 00:17:15.370 user 0m14.987s 00:17:15.370 sys 0m2.671s 00:17:15.370 10:57:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:15.370 10:57:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:17:15.370 ************************************ 00:17:15.370 END TEST nvmf_multicontroller 00:17:15.370 ************************************ 00:17:15.633 10:57:31 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:15.633 10:57:31 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:15.633 10:57:31 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:15.633 10:57:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:15.633 ************************************ 00:17:15.633 START TEST nvmf_aer 00:17:15.633 ************************************ 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:17:15.633 * Looking for test storage... 
00:17:15.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:17:15.633 10:57:31 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:18.198 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:18.198 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:17:18.198 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:18.198 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:17:18.198 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:18.198 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:18.198 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:18.198 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:17:18.198 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:18.198 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:17:18.198 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:17:18.198 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:17:18.198 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:18.199 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:17:18.199 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:18.199 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:18.199 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:18.199 
10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:18.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:18.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:17:18.199 00:17:18.199 --- 10.0.0.2 ping statistics --- 00:17:18.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.199 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:18.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:18.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:17:18.199 00:17:18.199 --- 10.0.0.1 ping statistics --- 00:17:18.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.199 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2830624 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2830624 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@827 -- # '[' -z 2830624 ']' 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:18.199 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:18.199 [2024-05-15 10:57:34.243460] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:17:18.199 [2024-05-15 10:57:34.243547] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.199 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.199 [2024-05-15 10:57:34.323799] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:18.458 [2024-05-15 10:57:34.444417] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.458 [2024-05-15 10:57:34.444479] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:18.458 [2024-05-15 10:57:34.444495] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.458 [2024-05-15 10:57:34.444508] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.458 [2024-05-15 10:57:34.444519] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:18.458 [2024-05-15 10:57:34.444581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.458 [2024-05-15 10:57:34.444635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.458 [2024-05-15 10:57:34.444750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:18.458 [2024-05-15 10:57:34.444753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.458 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:18.458 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@860 -- # return 0 00:17:18.458 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:18.458 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:18.458 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:18.458 10:57:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.458 10:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:18.458 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.458 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:18.458 [2024-05-15 10:57:34.596756] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.458 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.458 10:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:17:18.458 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.458 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:18.458 Malloc0 00:17:18.458 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.458 10:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:17:18.458 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.458 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:18.458 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.458 10:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:18.458 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.458 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:18.458 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.458 10:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:18.458 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.458 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:18.459 [2024-05-15 10:57:34.647633] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:18.459 [2024-05-15 10:57:34.647951] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.459 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.459 10:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:17:18.459 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.459 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:18.459 [ 00:17:18.459 { 00:17:18.459 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:18.459 "subtype": "Discovery", 00:17:18.459 "listen_addresses": [], 00:17:18.459 "allow_any_host": true, 00:17:18.459 "hosts": [] 00:17:18.459 }, 00:17:18.459 { 00:17:18.459 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.459 "subtype": "NVMe", 00:17:18.459 "listen_addresses": [ 00:17:18.459 { 00:17:18.459 "trtype": "TCP", 00:17:18.459 "adrfam": "IPv4", 00:17:18.459 "traddr": "10.0.0.2", 00:17:18.459 "trsvcid": "4420" 00:17:18.459 } 00:17:18.459 ], 00:17:18.459 "allow_any_host": true, 00:17:18.459 "hosts": [], 00:17:18.459 "serial_number": "SPDK00000000000001", 00:17:18.459 "model_number": "SPDK bdev Controller", 00:17:18.459 "max_namespaces": 2, 00:17:18.459 "min_cntlid": 1, 00:17:18.459 "max_cntlid": 65519, 00:17:18.459 "namespaces": [ 00:17:18.459 { 00:17:18.459 "nsid": 1, 00:17:18.459 "bdev_name": "Malloc0", 00:17:18.459 "name": "Malloc0", 00:17:18.459 "nguid": "996D8A7EB6954D7EA08519DB5A73A39D", 00:17:18.459 "uuid": "996d8a7e-b695-4d7e-a085-19db5a73a39d" 00:17:18.459 } 00:17:18.459 ] 00:17:18.459 } 00:17:18.459 ] 00:17:18.459 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.459 10:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:18.459 10:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:17:18.459 10:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2830650 00:17:18.459 10:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:17:18.459 10:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:17:18.459 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1261 -- # local i=0 00:17:18.459 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:18.459 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 0 -lt 200 ']' 00:17:18.459 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=1 00:17:18.459 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:17:18.717 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.717 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:18.717 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' 1 -lt 200 ']' 00:17:18.717 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # i=2 00:17:18.717 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # sleep 0.1 00:17:18.717 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:18.717 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:18.717 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # return 0 00:17:18.717 10:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:17:18.717 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.717 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:18.717 Malloc1 00:17:18.717 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.717 10:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:17:18.717 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.717 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:18.717 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.717 10:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:17:18.717 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.717 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:18.975 Asynchronous Event Request test 00:17:18.975 Attaching to 10.0.0.2 00:17:18.975 Attached to 10.0.0.2 00:17:18.975 Registering asynchronous event callbacks... 00:17:18.975 Starting namespace attribute notice tests for all controllers... 00:17:18.975 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:18.975 aer_cb - Changed Namespace 00:17:18.975 Cleaning up... 00:17:18.975 [ 00:17:18.975 { 00:17:18.975 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:18.975 "subtype": "Discovery", 00:17:18.975 "listen_addresses": [], 00:17:18.976 "allow_any_host": true, 00:17:18.976 "hosts": [] 00:17:18.976 }, 00:17:18.976 { 00:17:18.976 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:18.976 "subtype": "NVMe", 00:17:18.976 "listen_addresses": [ 00:17:18.976 { 00:17:18.976 "trtype": "TCP", 00:17:18.976 "adrfam": "IPv4", 00:17:18.976 "traddr": "10.0.0.2", 00:17:18.976 "trsvcid": "4420" 00:17:18.976 } 00:17:18.976 ], 00:17:18.976 "allow_any_host": true, 00:17:18.976 "hosts": [], 00:17:18.976 "serial_number": "SPDK00000000000001", 00:17:18.976 "model_number": "SPDK bdev Controller", 00:17:18.976 "max_namespaces": 2, 00:17:18.976 "min_cntlid": 1, 00:17:18.976 "max_cntlid": 65519, 00:17:18.976 "namespaces": [ 00:17:18.976 { 00:17:18.976 "nsid": 1, 00:17:18.976 "bdev_name": "Malloc0", 00:17:18.976 "name": "Malloc0", 00:17:18.976 "nguid": "996D8A7EB6954D7EA08519DB5A73A39D", 00:17:18.976 "uuid": "996d8a7e-b695-4d7e-a085-19db5a73a39d" 00:17:18.976 }, 00:17:18.976 { 00:17:18.976 "nsid": 2, 00:17:18.976 "bdev_name": "Malloc1", 00:17:18.976 "name": "Malloc1", 00:17:18.976 "nguid": "4CF6D96864594CF3955CF3974AE6E8C3", 00:17:18.976 "uuid": "4cf6d968-6459-4cf3-955c-f3974ae6e8c3" 00:17:18.976 } 00:17:18.976 ] 00:17:18.976 } 00:17:18.976 ] 00:17:18.976 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.976 10:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2830650 00:17:18.976 10:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:18.976 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.976 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:18.976 10:57:34 
nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.976 10:57:34 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:17:18.976 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.976 10:57:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:18.976 rmmod nvme_tcp 00:17:18.976 rmmod nvme_fabrics 00:17:18.976 rmmod nvme_keyring 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2830624 ']' 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2830624 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@946 -- # '[' -z 2830624 ']' 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@950 -- # kill -0 2830624 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # uname 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2830624 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2830624' 00:17:18.976 killing process with pid 2830624 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # kill 2830624 00:17:18.976 [2024-05-15 10:57:35.112104] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:18.976 10:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@970 -- # wait 2830624 00:17:19.235 10:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:19.235 10:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:19.235 10:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:19.235 10:57:35 nvmf_tcp.nvmf_aer -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:19.235 10:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:19.235 10:57:35 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.235 10:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:19.235 10:57:35 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.767 10:57:37 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:21.767 00:17:21.767 real 0m5.790s 00:17:21.767 user 0m4.422s 00:17:21.767 sys 0m2.159s 00:17:21.767 10:57:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:21.767 10:57:37 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:17:21.767 ************************************ 00:17:21.767 END TEST nvmf_aer 00:17:21.767 ************************************ 00:17:21.767 10:57:37 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:21.767 10:57:37 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:21.767 10:57:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:21.767 10:57:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:21.767 ************************************ 00:17:21.767 START TEST nvmf_async_init 00:17:21.767 ************************************ 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:17:21.767 * Looking for test storage... 00:17:21.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
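Stripped of the xtrace noise, the nvmf_aer run that just finished reduces to a short RPC sequence plus the aer consumer. A condensed sketch using scripts/rpc.py directly (rpc_cmd in the trace is a thin wrapper around it; the paths are this job's layout, and this is a recap, not a drop-in replacement for the harness):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 --name Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # run the AER consumer, then hot-add a second namespace to trigger the
    # namespace-attribute-changed AEN it waits for
    test/nvme/aer/aer \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        -n 2 -t /tmp/aer_touch_file &
    $rpc bdev_malloc_create 64 4096 --name Malloc1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2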
00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:21.767 
10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=ee70cfb54701486d8f1c4e8e5d8af060 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:17:21.767 10:57:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:17:24.305 10:57:39 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:24.305 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:24.305 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:24.305 10:57:39 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:24.305 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:24.305 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:24.305 10:57:39 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:24.305 10:57:39 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:24.305 10:57:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:24.305 10:57:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:24.305 10:57:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:24.305 10:57:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:24.305 10:57:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:24.305 10:57:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:24.305 10:57:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:24.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:24.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:17:24.305 00:17:24.305 --- 10.0.0.2 ping statistics --- 00:17:24.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.305 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:17:24.305 10:57:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:24.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:24.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:17:24.305 00:17:24.305 --- 10.0.0.1 ping statistics --- 00:17:24.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:24.305 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:17:24.305 10:57:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:24.305 10:57:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:17:24.305 10:57:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:24.306 10:57:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:24.306 10:57:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:24.306 10:57:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:24.306 10:57:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:24.306 10:57:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:24.306 10:57:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:24.306 10:57:40 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:17:24.306 10:57:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:24.306 10:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:24.306 10:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:24.306 10:57:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2832991 00:17:24.306 10:57:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:24.306 10:57:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2832991 00:17:24.306 10:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@827 -- # '[' -z 2832991 ']' 00:17:24.306 10:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.306 10:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:24.306 10:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.306 10:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:24.306 10:57:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:24.306 [2024-05-15 10:57:40.163380] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:17:24.306 [2024-05-15 10:57:40.163485] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.306 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.306 [2024-05-15 10:57:40.245772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.306 [2024-05-15 10:57:40.360859] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.306 [2024-05-15 10:57:40.360926] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
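What the rpc_cmd calls below are exercising: a null bdev is exported with an explicit namespace GUID, and the initiator-side attach must surface that same value as the bdev's uuid/nguid. A minimal sketch of that setup under the same assumptions as above ($rpc aliased to this job's scripts/rpc.py; the nguid variable mirrors the uuidgen | tr -d - step in the trace):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nguid=$(uuidgen | tr -d -)
    $rpc nvmf_create_transport -t tcp -o
    $rpc bdev_null_create null0 1024 512    # 1024 MiB backing, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0
    # the attached bdev should now report the nguid we assigned on the target side
    $rpc bdev_get_bdevs -b nvme0n1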
00:17:24.306 [2024-05-15 10:57:40.360952] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:24.306 [2024-05-15 10:57:40.360966] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:24.306 [2024-05-15 10:57:40.361002] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:24.306 [2024-05-15 10:57:40.361036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@860 -- # return 0 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.241 [2024-05-15 10:57:41.138329] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.241 null0 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ee70cfb54701486d8f1c4e8e5d8af060 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:25.241 
10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.241 [2024-05-15 10:57:41.178351] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:25.241 [2024-05-15 10:57:41.178605] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.241 nvme0n1 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.241 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.241 [ 00:17:25.241 { 00:17:25.241 "name": "nvme0n1", 00:17:25.241 "aliases": [ 00:17:25.241 "ee70cfb5-4701-486d-8f1c-4e8e5d8af060" 00:17:25.241 ], 00:17:25.241 "product_name": "NVMe disk", 00:17:25.241 "block_size": 512, 00:17:25.241 "num_blocks": 2097152, 00:17:25.241 "uuid": "ee70cfb5-4701-486d-8f1c-4e8e5d8af060", 00:17:25.241 "assigned_rate_limits": { 00:17:25.241 "rw_ios_per_sec": 0, 00:17:25.241 "rw_mbytes_per_sec": 0, 00:17:25.241 "r_mbytes_per_sec": 0, 00:17:25.241 "w_mbytes_per_sec": 0 00:17:25.242 }, 00:17:25.242 "claimed": false, 00:17:25.242 "zoned": false, 00:17:25.242 "supported_io_types": { 00:17:25.242 "read": true, 00:17:25.242 "write": true, 00:17:25.242 "unmap": false, 00:17:25.242 "write_zeroes": true, 00:17:25.242 "flush": true, 00:17:25.242 "reset": true, 00:17:25.242 "compare": true, 00:17:25.242 "compare_and_write": true, 00:17:25.242 "abort": true, 00:17:25.242 "nvme_admin": true, 00:17:25.242 "nvme_io": true 00:17:25.242 }, 00:17:25.242 "memory_domains": [ 00:17:25.242 { 00:17:25.242 "dma_device_id": "system", 00:17:25.242 "dma_device_type": 1 00:17:25.242 } 00:17:25.242 ], 00:17:25.242 "driver_specific": { 00:17:25.242 "nvme": [ 00:17:25.242 { 00:17:25.242 "trid": { 00:17:25.242 "trtype": "TCP", 00:17:25.242 "adrfam": "IPv4", 00:17:25.242 "traddr": "10.0.0.2", 00:17:25.242 "trsvcid": "4420", 00:17:25.242 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:25.242 }, 00:17:25.242 "ctrlr_data": { 00:17:25.242 "cntlid": 1, 00:17:25.242 "vendor_id": "0x8086", 00:17:25.242 "model_number": "SPDK bdev Controller", 00:17:25.242 "serial_number": "00000000000000000000", 00:17:25.242 "firmware_revision": "24.05", 00:17:25.242 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:25.242 "oacs": { 00:17:25.242 "security": 0, 00:17:25.242 "format": 0, 00:17:25.242 "firmware": 0, 00:17:25.242 "ns_manage": 0 00:17:25.242 }, 00:17:25.242 "multi_ctrlr": true, 00:17:25.242 "ana_reporting": false 00:17:25.242 }, 00:17:25.242 "vs": { 00:17:25.242 "nvme_version": "1.3" 00:17:25.242 }, 00:17:25.242 "ns_data": { 00:17:25.242 "id": 1, 00:17:25.242 "can_share": true 00:17:25.242 } 
00:17:25.242 } 00:17:25.242 ], 00:17:25.242 "mp_policy": "active_passive" 00:17:25.242 } 00:17:25.242 } 00:17:25.242 ] 00:17:25.242 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.242 10:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:17:25.242 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.242 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.242 [2024-05-15 10:57:41.431143] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:17:25.242 [2024-05-15 10:57:41.431233] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230fb20 (9): Bad file descriptor 00:17:25.500 [2024-05-15 10:57:41.573088] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.500 [ 00:17:25.500 { 00:17:25.500 "name": "nvme0n1", 00:17:25.500 "aliases": [ 00:17:25.500 "ee70cfb5-4701-486d-8f1c-4e8e5d8af060" 00:17:25.500 ], 00:17:25.500 "product_name": "NVMe disk", 00:17:25.500 "block_size": 512, 00:17:25.500 "num_blocks": 2097152, 00:17:25.500 "uuid": "ee70cfb5-4701-486d-8f1c-4e8e5d8af060", 00:17:25.500 "assigned_rate_limits": { 00:17:25.500 "rw_ios_per_sec": 0, 00:17:25.500 "rw_mbytes_per_sec": 0, 00:17:25.500 "r_mbytes_per_sec": 0, 00:17:25.500 "w_mbytes_per_sec": 0 00:17:25.500 }, 00:17:25.500 "claimed": false, 00:17:25.500 "zoned": false, 00:17:25.500 "supported_io_types": { 00:17:25.500 "read": true, 00:17:25.500 "write": true, 00:17:25.500 "unmap": false, 00:17:25.500 "write_zeroes": true, 00:17:25.500 "flush": true, 00:17:25.500 "reset": true, 00:17:25.500 "compare": true, 00:17:25.500 "compare_and_write": true, 00:17:25.500 "abort": true, 00:17:25.500 "nvme_admin": true, 00:17:25.500 "nvme_io": true 00:17:25.500 }, 00:17:25.500 "memory_domains": [ 00:17:25.500 { 00:17:25.500 "dma_device_id": "system", 00:17:25.500 "dma_device_type": 1 00:17:25.500 } 00:17:25.500 ], 00:17:25.500 "driver_specific": { 00:17:25.500 "nvme": [ 00:17:25.500 { 00:17:25.500 "trid": { 00:17:25.500 "trtype": "TCP", 00:17:25.500 "adrfam": "IPv4", 00:17:25.500 "traddr": "10.0.0.2", 00:17:25.500 "trsvcid": "4420", 00:17:25.500 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:25.500 }, 00:17:25.500 "ctrlr_data": { 00:17:25.500 "cntlid": 2, 00:17:25.500 "vendor_id": "0x8086", 00:17:25.500 "model_number": "SPDK bdev Controller", 00:17:25.500 "serial_number": "00000000000000000000", 00:17:25.500 "firmware_revision": "24.05", 00:17:25.500 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:25.500 "oacs": { 00:17:25.500 "security": 0, 00:17:25.500 "format": 0, 00:17:25.500 "firmware": 0, 00:17:25.500 "ns_manage": 0 00:17:25.500 }, 00:17:25.500 "multi_ctrlr": true, 00:17:25.500 "ana_reporting": false 00:17:25.500 }, 00:17:25.500 "vs": { 00:17:25.500 "nvme_version": "1.3" 00:17:25.500 }, 00:17:25.500 "ns_data": { 00:17:25.500 "id": 1, 00:17:25.500 "can_share": true 00:17:25.500 } 00:17:25.500 } 00:17:25.500 ], 00:17:25.500 "mp_policy": "active_passive" 
00:17:25.500 } 00:17:25.500 } 00:17:25.500 ] 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.WXcErkVwoG 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.WXcErkVwoG 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.500 [2024-05-15 10:57:41.623762] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:25.500 [2024-05-15 10:57:41.623887] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WXcErkVwoG 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.500 [2024-05-15 10:57:41.631784] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WXcErkVwoG 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.500 [2024-05-15 10:57:41.639801] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:25.500 [2024-05-15 10:57:41.639860] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to 
be removed in v24.09 00:17:25.500 nvme0n1 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.500 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.500 [ 00:17:25.500 { 00:17:25.500 "name": "nvme0n1", 00:17:25.500 "aliases": [ 00:17:25.500 "ee70cfb5-4701-486d-8f1c-4e8e5d8af060" 00:17:25.500 ], 00:17:25.500 "product_name": "NVMe disk", 00:17:25.500 "block_size": 512, 00:17:25.500 "num_blocks": 2097152, 00:17:25.500 "uuid": "ee70cfb5-4701-486d-8f1c-4e8e5d8af060", 00:17:25.500 "assigned_rate_limits": { 00:17:25.500 "rw_ios_per_sec": 0, 00:17:25.500 "rw_mbytes_per_sec": 0, 00:17:25.500 "r_mbytes_per_sec": 0, 00:17:25.500 "w_mbytes_per_sec": 0 00:17:25.500 }, 00:17:25.500 "claimed": false, 00:17:25.500 "zoned": false, 00:17:25.500 "supported_io_types": { 00:17:25.500 "read": true, 00:17:25.500 "write": true, 00:17:25.500 "unmap": false, 00:17:25.500 "write_zeroes": true, 00:17:25.500 "flush": true, 00:17:25.500 "reset": true, 00:17:25.500 "compare": true, 00:17:25.500 "compare_and_write": true, 00:17:25.500 "abort": true, 00:17:25.500 "nvme_admin": true, 00:17:25.500 "nvme_io": true 00:17:25.500 }, 00:17:25.500 "memory_domains": [ 00:17:25.500 { 00:17:25.500 "dma_device_id": "system", 00:17:25.500 "dma_device_type": 1 00:17:25.500 } 00:17:25.500 ], 00:17:25.500 "driver_specific": { 00:17:25.500 "nvme": [ 00:17:25.500 { 00:17:25.500 "trid": { 00:17:25.500 "trtype": "TCP", 00:17:25.500 "adrfam": "IPv4", 00:17:25.500 "traddr": "10.0.0.2", 00:17:25.500 "trsvcid": "4421", 00:17:25.500 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:25.500 }, 00:17:25.500 "ctrlr_data": { 00:17:25.500 "cntlid": 3, 00:17:25.500 "vendor_id": "0x8086", 00:17:25.501 "model_number": "SPDK bdev Controller", 00:17:25.501 "serial_number": "00000000000000000000", 00:17:25.501 "firmware_revision": "24.05", 00:17:25.501 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:25.501 "oacs": { 00:17:25.501 "security": 0, 00:17:25.501 "format": 0, 00:17:25.501 "firmware": 0, 00:17:25.501 "ns_manage": 0 00:17:25.501 }, 00:17:25.501 "multi_ctrlr": true, 00:17:25.501 "ana_reporting": false 00:17:25.501 }, 00:17:25.501 "vs": { 00:17:25.501 "nvme_version": "1.3" 00:17:25.501 }, 00:17:25.501 "ns_data": { 00:17:25.501 "id": 1, 00:17:25.501 "can_share": true 00:17:25.501 } 00:17:25.501 } 00:17:25.501 ], 00:17:25.501 "mp_policy": "active_passive" 00:17:25.501 } 00:17:25.501 } 00:17:25.501 ] 00:17:25.501 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.501 10:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:25.501 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.501 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:25.759 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.759 10:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.WXcErkVwoG 00:17:25.759 10:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:17:25.759 10:57:41 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:17:25.759 10:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:17:25.759 10:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:17:25.759 10:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:25.759 10:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:17:25.759 10:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:25.759 10:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:25.759 rmmod nvme_tcp 00:17:25.759 rmmod nvme_fabrics 00:17:25.759 rmmod nvme_keyring 00:17:25.759 10:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:25.759 10:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:17:25.759 10:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:17:25.759 10:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2832991 ']' 00:17:25.759 10:57:41 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2832991 00:17:25.759 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@946 -- # '[' -z 2832991 ']' 00:17:25.759 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@950 -- # kill -0 2832991 00:17:25.759 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # uname 00:17:25.759 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:25.759 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2832991 00:17:25.759 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:25.759 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:25.759 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2832991' 00:17:25.759 killing process with pid 2832991 00:17:25.759 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # kill 2832991 00:17:25.759 [2024-05-15 10:57:41.821935] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:25.760 [2024-05-15 10:57:41.821988] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:25.760 [2024-05-15 10:57:41.822003] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:25.760 10:57:41 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@970 -- # wait 2832991 00:17:26.018 10:57:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:26.018 10:57:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:26.018 10:57:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:26.018 10:57:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:26.018 10:57:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:26.018 10:57:42 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.018 10:57:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.018 10:57:42 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.922 10:57:44 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:27.922 00:17:27.922 real 0m6.641s 00:17:27.922 user 0m3.143s 00:17:27.922 sys 0m2.121s 00:17:27.922 10:57:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:27.922 10:57:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:17:27.922 ************************************ 00:17:27.922 END TEST nvmf_async_init 00:17:27.922 ************************************ 00:17:27.922 10:57:44 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:27.922 10:57:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:27.922 10:57:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:27.922 10:57:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:28.181 ************************************ 00:17:28.181 START TEST dma 00:17:28.181 ************************************ 00:17:28.181 10:57:44 nvmf_tcp.dma -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:17:28.181 * Looking for test storage... 00:17:28.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:28.181 10:57:44 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:28.181 10:57:44 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.181 10:57:44 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.181 10:57:44 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.181 10:57:44 nvmf_tcp.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.181 10:57:44 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.181 10:57:44 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.181 10:57:44 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:17:28.181 10:57:44 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:28.181 10:57:44 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:28.181 10:57:44 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:17:28.181 10:57:44 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:17:28.181 00:17:28.181 real 0m0.065s 00:17:28.181 user 0m0.033s 00:17:28.181 sys 0m0.037s 00:17:28.181 10:57:44 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:28.181 10:57:44 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:17:28.182 ************************************ 
00:17:28.182 END TEST dma 00:17:28.182 ************************************ 00:17:28.182 10:57:44 nvmf_tcp -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:28.182 10:57:44 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:28.182 10:57:44 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:28.182 10:57:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:28.182 ************************************ 00:17:28.182 START TEST nvmf_identify 00:17:28.182 ************************************ 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:28.182 * Looking for test storage... 00:17:28.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:17:28.182 10:57:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:30.714 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:30.714 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:30.714 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:30.714 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:30.714 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:30.972 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:30.972 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:30.972 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:30.972 10:57:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:30.972 10:57:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:30.972 10:57:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:30.972 10:57:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:30.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:30.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:17:30.972 00:17:30.972 --- 10.0.0.2 ping statistics --- 00:17:30.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.972 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:17:30.972 10:57:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:30.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:30.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:17:30.972 00:17:30.972 --- 10.0.0.1 ping statistics --- 00:17:30.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.972 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:17:30.972 10:57:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.972 10:57:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:17:30.972 10:57:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:30.972 10:57:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.972 10:57:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:30.972 10:57:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:30.972 10:57:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.972 10:57:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:30.972 10:57:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:30.972 10:57:47 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:30.972 10:57:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:30.972 10:57:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:30.972 10:57:47 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2835545 00:17:30.972 10:57:47 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:30.972 10:57:47 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:30.972 10:57:47 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2835545 00:17:30.972 10:57:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@827 -- # '[' -z 2835545 ']' 00:17:30.972 10:57:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.973 10:57:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:30.973 10:57:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.973 10:57:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:30.973 10:57:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:30.973 [2024-05-15 10:57:47.103373] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
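For readers following the trace: nvmftestinit above split the two e810 ports into a target/initiator pair by moving one port into a private network namespace, assigned the 10.0.0.x addresses, opened the NVMe/TCP port in iptables, and proved the path with the two pings before launching nvmf_tgt inside that namespace. A minimal sketch of the same plumbing, using only commands, names, and addresses taken from the trace (the preliminary address flushes, error handling, and the surrounding harness are omitted):

  # Target-side port (cvl_0_0) goes into its own namespace; the
  # initiator-side port (cvl_0_1) stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Admit NVMe/TCP traffic on the default port, then verify both directions.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target itself then runs inside the namespace, which is why the trace wraps the nvmf_tgt launch in 'ip netns exec cvl_0_0_ns_spdk ... build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF'.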
00:17:30.973 [2024-05-15 10:57:47.103438] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.973 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.973 [2024-05-15 10:57:47.180772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:31.231 [2024-05-15 10:57:47.300401] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.231 [2024-05-15 10:57:47.300464] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:31.231 [2024-05-15 10:57:47.300487] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.231 [2024-05-15 10:57:47.300498] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.231 [2024-05-15 10:57:47.300508] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:31.231 [2024-05-15 10:57:47.300585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.231 [2024-05-15 10:57:47.300641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.231 [2024-05-15 10:57:47.300717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:31.231 [2024-05-15 10:57:47.300720] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@860 -- # return 0 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:32.169 [2024-05-15 10:57:48.047485] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:32.169 Malloc0 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 
ABCDEF0123456789 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:32.169 [2024-05-15 10:57:48.128615] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:32.169 [2024-05-15 10:57:48.128898] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.169 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:32.169 [ 00:17:32.169 { 00:17:32.169 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:32.169 "subtype": "Discovery", 00:17:32.169 "listen_addresses": [ 00:17:32.169 { 00:17:32.169 "trtype": "TCP", 00:17:32.169 "adrfam": "IPv4", 00:17:32.169 "traddr": "10.0.0.2", 00:17:32.169 "trsvcid": "4420" 00:17:32.169 } 00:17:32.169 ], 00:17:32.169 "allow_any_host": true, 00:17:32.169 "hosts": [] 00:17:32.169 }, 00:17:32.169 { 00:17:32.169 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:32.169 "subtype": "NVMe", 00:17:32.169 "listen_addresses": [ 00:17:32.169 { 00:17:32.169 "trtype": "TCP", 00:17:32.169 "adrfam": "IPv4", 00:17:32.169 "traddr": "10.0.0.2", 00:17:32.169 "trsvcid": "4420" 00:17:32.169 } 00:17:32.169 ], 00:17:32.169 "allow_any_host": true, 00:17:32.169 "hosts": [], 00:17:32.169 "serial_number": "SPDK00000000000001", 00:17:32.169 "model_number": "SPDK bdev Controller", 00:17:32.169 "max_namespaces": 32, 00:17:32.169 "min_cntlid": 1, 00:17:32.169 "max_cntlid": 65519, 00:17:32.169 "namespaces": [ 00:17:32.169 { 00:17:32.169 "nsid": 1, 00:17:32.169 "bdev_name": "Malloc0", 00:17:32.169 "name": "Malloc0", 00:17:32.169 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:32.170 "eui64": "ABCDEF0123456789", 00:17:32.170 "uuid": "0992c90c-ce7a-44da-827c-3ffad3ddf834" 00:17:32.170 } 00:17:32.170 ] 00:17:32.170 } 00:17:32.170 ] 00:17:32.170 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.170 10:57:48 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:32.170 [2024-05-15 
10:57:48.171357] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:17:32.170 [2024-05-15 10:57:48.171401] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835698 ] 00:17:32.170 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.170 [2024-05-15 10:57:48.205303] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:32.170 [2024-05-15 10:57:48.205373] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:32.170 [2024-05-15 10:57:48.205383] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:32.170 [2024-05-15 10:57:48.205397] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:32.170 [2024-05-15 10:57:48.205412] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:32.170 [2024-05-15 10:57:48.208989] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:32.170 [2024-05-15 10:57:48.209059] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x9a6c80 0 00:17:32.170 [2024-05-15 10:57:48.216955] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:32.170 [2024-05-15 10:57:48.216978] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:32.170 [2024-05-15 10:57:48.216993] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:32.170 [2024-05-15 10:57:48.217001] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:32.170 [2024-05-15 10:57:48.217062] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.170 [2024-05-15 10:57:48.217076] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.170 [2024-05-15 10:57:48.217085] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a6c80) 00:17:32.170 [2024-05-15 10:57:48.217106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:32.170 [2024-05-15 10:57:48.217132] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa05e40, cid 0, qid 0 00:17:32.170 [2024-05-15 10:57:48.223964] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.170 [2024-05-15 10:57:48.223983] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.170 [2024-05-15 10:57:48.223991] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.170 [2024-05-15 10:57:48.223999] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa05e40) on tqpair=0x9a6c80 00:17:32.170 [2024-05-15 10:57:48.224025] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:32.170 [2024-05-15 10:57:48.224037] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:32.170 [2024-05-15 10:57:48.224047] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:32.170 [2024-05-15 10:57:48.224070] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 
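The rpc_cmd calls traced above are the harness's wrapper around SPDK's JSON-RPC client; as a hedged sketch (invoking scripts/rpc.py directly is my substitution, everything else is verbatim from this run), the same target configuration could be issued by hand once nvmf_tgt is listening on /var/tmp/spdk.sock. The -o transport flag is reproduced as the trace uses it; in SPDK's rpc.py it relates to the TCP C2H success optimization.

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # TCP transport with an 8192-byte IO unit size.
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  # 64 MiB RAM-backed bdev with 512-byte blocks, served through cnode1.
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  # One listener for the data subsystem, one for the discovery service.
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_get_subsystems   # returns the JSON dump shown above

Note the deprecation warning the listener add triggers ('[listen_]address.transport is deprecated in favor of trtype'): newer RPC schemas carry the transport as a trtype field rather than a transport field inside the address object.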
00:17:32.170 [2024-05-15 10:57:48.224079] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.170 [2024-05-15 10:57:48.224085] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a6c80) 00:17:32.170 [2024-05-15 10:57:48.224096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.170 [2024-05-15 10:57:48.224120] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa05e40, cid 0, qid 0 00:17:32.170 [2024-05-15 10:57:48.224335] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.170 [2024-05-15 10:57:48.224351] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.170 [2024-05-15 10:57:48.224362] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.170 [2024-05-15 10:57:48.224370] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa05e40) on tqpair=0x9a6c80 00:17:32.170 [2024-05-15 10:57:48.224381] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:32.170 [2024-05-15 10:57:48.224396] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:32.170 [2024-05-15 10:57:48.224409] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.170 [2024-05-15 10:57:48.224417] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.170 [2024-05-15 10:57:48.224423] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a6c80) 00:17:32.170 [2024-05-15 10:57:48.224449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.170 [2024-05-15 10:57:48.224471] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa05e40, cid 0, qid 0 00:17:32.170 [2024-05-15 10:57:48.224664] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.170 [2024-05-15 10:57:48.224680] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.170 [2024-05-15 10:57:48.224687] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.170 [2024-05-15 10:57:48.224694] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa05e40) on tqpair=0x9a6c80 00:17:32.170 [2024-05-15 10:57:48.224704] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:32.170 [2024-05-15 10:57:48.224720] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:32.170 [2024-05-15 10:57:48.224733] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.170 [2024-05-15 10:57:48.224740] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.170 [2024-05-15 10:57:48.224747] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a6c80) 00:17:32.170 [2024-05-15 10:57:48.224758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.170 [2024-05-15 10:57:48.224779] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa05e40, cid 0, qid 0 00:17:32.170 [2024-05-15 10:57:48.224989] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.170 [2024-05-15 10:57:48.225005] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.170 [2024-05-15 10:57:48.225012] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.170 [2024-05-15 10:57:48.225019] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa05e40) on tqpair=0x9a6c80 00:17:32.170 [2024-05-15 10:57:48.225030] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:32.170 [2024-05-15 10:57:48.225048] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.170 [2024-05-15 10:57:48.225058] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.170 [2024-05-15 10:57:48.225064] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a6c80) 00:17:32.170 [2024-05-15 10:57:48.225075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.170 [2024-05-15 10:57:48.225096] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa05e40, cid 0, qid 0 00:17:32.170 [2024-05-15 10:57:48.225278] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.170 [2024-05-15 10:57:48.225294] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.170 [2024-05-15 10:57:48.225300] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.170 [2024-05-15 10:57:48.225307] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa05e40) on tqpair=0x9a6c80 00:17:32.170 [2024-05-15 10:57:48.225322] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:32.170 [2024-05-15 10:57:48.225332] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:32.170 [2024-05-15 10:57:48.225347] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:32.170 [2024-05-15 10:57:48.225472] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:32.170 [2024-05-15 10:57:48.225481] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:32.170 [2024-05-15 10:57:48.225498] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.170 [2024-05-15 10:57:48.225505] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.170 [2024-05-15 10:57:48.225511] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a6c80) 00:17:32.170 [2024-05-15 10:57:48.225521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.170 [2024-05-15 10:57:48.225541] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa05e40, cid 0, qid 0 00:17:32.170 [2024-05-15 10:57:48.225751] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.170 [2024-05-15 10:57:48.225767] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.170 
[2024-05-15 10:57:48.225774] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.170 [2024-05-15 10:57:48.225781] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa05e40) on tqpair=0x9a6c80 00:17:32.170 [2024-05-15 10:57:48.225790] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:32.170 [2024-05-15 10:57:48.225809] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.170 [2024-05-15 10:57:48.225818] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.170 [2024-05-15 10:57:48.225825] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a6c80) 00:17:32.170 [2024-05-15 10:57:48.225835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.170 [2024-05-15 10:57:48.225870] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa05e40, cid 0, qid 0 00:17:32.170 [2024-05-15 10:57:48.226070] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.170 [2024-05-15 10:57:48.226086] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.170 [2024-05-15 10:57:48.226093] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.170 [2024-05-15 10:57:48.226100] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa05e40) on tqpair=0x9a6c80 00:17:32.170 [2024-05-15 10:57:48.226108] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:32.170 [2024-05-15 10:57:48.226117] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:32.170 [2024-05-15 10:57:48.226133] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:32.170 [2024-05-15 10:57:48.226147] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:32.170 [2024-05-15 10:57:48.226164] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.170 [2024-05-15 10:57:48.226172] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a6c80) 00:17:32.171 [2024-05-15 10:57:48.226183] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.171 [2024-05-15 10:57:48.226209] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa05e40, cid 0, qid 0 00:17:32.171 [2024-05-15 10:57:48.226456] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:32.171 [2024-05-15 10:57:48.226472] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:32.171 [2024-05-15 10:57:48.226480] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.226593] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9a6c80): datao=0, datal=4096, cccid=0 00:17:32.171 [2024-05-15 10:57:48.226604] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa05e40) on tqpair(0x9a6c80): expected_datao=0, payload_size=4096 00:17:32.171 
[2024-05-15 10:57:48.226613] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.226626] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.226636] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.226650] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.171 [2024-05-15 10:57:48.226659] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.171 [2024-05-15 10:57:48.226666] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.226673] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa05e40) on tqpair=0x9a6c80 00:17:32.171 [2024-05-15 10:57:48.226685] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:32.171 [2024-05-15 10:57:48.226695] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:32.171 [2024-05-15 10:57:48.226703] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:32.171 [2024-05-15 10:57:48.226712] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:32.171 [2024-05-15 10:57:48.226736] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:32.171 [2024-05-15 10:57:48.226744] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:32.171 [2024-05-15 10:57:48.226766] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:32.171 [2024-05-15 10:57:48.226784] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.226807] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.226813] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a6c80) 00:17:32.171 [2024-05-15 10:57:48.226824] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:32.171 [2024-05-15 10:57:48.226845] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa05e40, cid 0, qid 0 00:17:32.171 [2024-05-15 10:57:48.227072] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.171 [2024-05-15 10:57:48.227088] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.171 [2024-05-15 10:57:48.227095] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.227101] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa05e40) on tqpair=0x9a6c80 00:17:32.171 [2024-05-15 10:57:48.227116] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.227124] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.227130] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x9a6c80) 00:17:32.171 [2024-05-15 10:57:48.227140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.171 [2024-05-15 10:57:48.227151] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.227162] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.227169] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x9a6c80) 00:17:32.171 [2024-05-15 10:57:48.227178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.171 [2024-05-15 10:57:48.227188] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.227195] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.227202] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x9a6c80) 00:17:32.171 [2024-05-15 10:57:48.227211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.171 [2024-05-15 10:57:48.227236] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.227243] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.227249] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a6c80) 00:17:32.171 [2024-05-15 10:57:48.227258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.171 [2024-05-15 10:57:48.227266] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:17:32.171 [2024-05-15 10:57:48.227303] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:32.171 [2024-05-15 10:57:48.227316] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.227323] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9a6c80) 00:17:32.171 [2024-05-15 10:57:48.227333] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.171 [2024-05-15 10:57:48.227355] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa05e40, cid 0, qid 0 00:17:32.171 [2024-05-15 10:57:48.227380] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa05fa0, cid 1, qid 0 00:17:32.171 [2024-05-15 10:57:48.227387] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa06100, cid 2, qid 0 00:17:32.171 [2024-05-15 10:57:48.227394] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa06260, cid 3, qid 0 00:17:32.171 [2024-05-15 10:57:48.227401] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa063c0, cid 4, qid 0 00:17:32.171 [2024-05-15 10:57:48.227621] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.171 [2024-05-15 10:57:48.227637] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.171 [2024-05-15 10:57:48.227644] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.227651] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa063c0) on 
tqpair=0x9a6c80 00:17:32.171 [2024-05-15 10:57:48.227677] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:32.171 [2024-05-15 10:57:48.227686] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:32.171 [2024-05-15 10:57:48.227706] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.227715] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9a6c80) 00:17:32.171 [2024-05-15 10:57:48.227725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.171 [2024-05-15 10:57:48.227746] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa063c0, cid 4, qid 0 00:17:32.171 [2024-05-15 10:57:48.231956] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:32.171 [2024-05-15 10:57:48.231973] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:32.171 [2024-05-15 10:57:48.231984] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.231991] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9a6c80): datao=0, datal=4096, cccid=4 00:17:32.171 [2024-05-15 10:57:48.231999] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa063c0) on tqpair(0x9a6c80): expected_datao=0, payload_size=4096 00:17:32.171 [2024-05-15 10:57:48.232007] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.232016] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.232024] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.232032] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.171 [2024-05-15 10:57:48.232041] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.171 [2024-05-15 10:57:48.232047] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.232054] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa063c0) on tqpair=0x9a6c80 00:17:32.171 [2024-05-15 10:57:48.232079] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:32.171 [2024-05-15 10:57:48.232121] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.232131] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9a6c80) 00:17:32.171 [2024-05-15 10:57:48.232142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.171 [2024-05-15 10:57:48.232154] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.232161] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.232168] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x9a6c80) 00:17:32.171 [2024-05-15 10:57:48.232177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.171 [2024-05-15 10:57:48.232204] 
nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa063c0, cid 4, qid 0 00:17:32.171 [2024-05-15 10:57:48.232232] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa06520, cid 5, qid 0 00:17:32.171 [2024-05-15 10:57:48.232491] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:32.171 [2024-05-15 10:57:48.232507] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:32.171 [2024-05-15 10:57:48.232528] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.232535] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9a6c80): datao=0, datal=1024, cccid=4 00:17:32.171 [2024-05-15 10:57:48.232542] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa063c0) on tqpair(0x9a6c80): expected_datao=0, payload_size=1024 00:17:32.171 [2024-05-15 10:57:48.232549] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.232559] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.232566] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.232575] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.171 [2024-05-15 10:57:48.232583] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.171 [2024-05-15 10:57:48.232590] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.232596] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa06520) on tqpair=0x9a6c80 00:17:32.171 [2024-05-15 10:57:48.274944] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.171 [2024-05-15 10:57:48.274965] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.171 [2024-05-15 10:57:48.274973] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.171 [2024-05-15 10:57:48.274980] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa063c0) on tqpair=0x9a6c80 00:17:32.171 [2024-05-15 10:57:48.275016] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.172 [2024-05-15 10:57:48.275026] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9a6c80) 00:17:32.172 [2024-05-15 10:57:48.275038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.172 [2024-05-15 10:57:48.275074] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa063c0, cid 4, qid 0 00:17:32.172 [2024-05-15 10:57:48.275375] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:32.172 [2024-05-15 10:57:48.275392] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:32.172 [2024-05-15 10:57:48.275399] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:32.172 [2024-05-15 10:57:48.275405] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9a6c80): datao=0, datal=3072, cccid=4 00:17:32.172 [2024-05-15 10:57:48.275413] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa063c0) on tqpair(0x9a6c80): expected_datao=0, payload_size=3072 00:17:32.172 [2024-05-15 10:57:48.275421] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.172 [2024-05-15 10:57:48.275442] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
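The DEBUG trace above is spdk_nvme_identify's admin-queue bring-up in full: the ICReq/ICResp TCP-level handshake, the FABRIC CONNECT capsule, property reads of VS and CAP, CC.EN toggled from 0 to 1 against CSTS.RDY, IDENTIFY controller, four ASYNC EVENT REQUESTs (cid 0 through 3), the keep-alive timer (5 s), and the discovery log page (log page identifier 70h in the low byte of cdw10: 00ff0070, 02ff0070, 00010070) read out in several GET LOG PAGE exchanges, down to a final 8-byte re-read of the header's generation counter completing just below. From the initiator side (the root namespace), a hedged equivalent of the same exchange with stock nvme-cli instead of spdk_nvme_identify, assuming nvme-cli is installed (nvmftestinit already loaded the nvme-tcp kernel module):

  # Query the discovery controller at the listener from the log; the
  # hostnqn matches the NVME_HOSTNQN the harness generated earlier.
  nvme discover -t tcp -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

Its discovery log output should list the nqn.2016-06.io.spdk:cnode1 subsystem at 10.0.0.2:4420, consistent with the nvmf_get_subsystems dump earlier in the test.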
00:17:32.172 [2024-05-15 10:57:48.275465] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:17:32.172 [2024-05-15 10:57:48.318950] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:32.172 [2024-05-15 10:57:48.318968] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:32.172 [2024-05-15 10:57:48.318990] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:32.172 [2024-05-15 10:57:48.318998] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa063c0) on tqpair=0x9a6c80
00:17:32.172 [2024-05-15 10:57:48.319015] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:17:32.172 [2024-05-15 10:57:48.319024] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x9a6c80)
00:17:32.172 [2024-05-15 10:57:48.319035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:32.172 [2024-05-15 10:57:48.319068] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa063c0, cid 4, qid 0
00:17:32.172 [2024-05-15 10:57:48.319295] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:17:32.172 [2024-05-15 10:57:48.319310] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:17:32.172 [2024-05-15 10:57:48.319318] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:17:32.172 [2024-05-15 10:57:48.319324] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x9a6c80): datao=0, datal=8, cccid=4
00:17:32.172 [2024-05-15 10:57:48.319332] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xa063c0) on tqpair(0x9a6c80): expected_datao=0, payload_size=8
00:17:32.172 [2024-05-15 10:57:48.319355] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:17:32.172 [2024-05-15 10:57:48.319365] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:17:32.172 [2024-05-15 10:57:48.319373] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:17:32.172 [2024-05-15 10:57:48.364025] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:32.172 [2024-05-15 10:57:48.364045] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:32.172 [2024-05-15 10:57:48.364053] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:32.172 [2024-05-15 10:57:48.364060] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa063c0) on tqpair=0x9a6c80
00:17:32.172 =====================================================
00:17:32.172 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:17:32.172 =====================================================
00:17:32.172 Controller Capabilities/Features
00:17:32.172 ================================
00:17:32.172 Vendor ID: 0000
00:17:32.172 Subsystem Vendor ID: 0000
00:17:32.172 Serial Number: ....................
00:17:32.172 Model Number: ........................................
00:17:32.172 Firmware Version: 24.05
00:17:32.172 Recommended Arb Burst: 0
00:17:32.172 IEEE OUI Identifier: 00 00 00
00:17:32.172 Multi-path I/O
00:17:32.172 May have multiple subsystem ports: No
00:17:32.172 May have multiple controllers: No
00:17:32.172 Associated with SR-IOV VF: No
00:17:32.172 Max Data Transfer Size: 131072
00:17:32.172 Max Number of Namespaces: 0
00:17:32.172 Max Number of I/O Queues: 1024
00:17:32.172 NVMe Specification Version (VS): 1.3
00:17:32.172 NVMe Specification Version (Identify): 1.3
00:17:32.172 Maximum Queue Entries: 128
00:17:32.172 Contiguous Queues Required: Yes
00:17:32.172 Arbitration Mechanisms Supported
00:17:32.172 Weighted Round Robin: Not Supported
00:17:32.172 Vendor Specific: Not Supported
00:17:32.172 Reset Timeout: 15000 ms
00:17:32.172 Doorbell Stride: 4 bytes
00:17:32.172 NVM Subsystem Reset: Not Supported
00:17:32.172 Command Sets Supported
00:17:32.172 NVM Command Set: Supported
00:17:32.172 Boot Partition: Not Supported
00:17:32.172 Memory Page Size Minimum: 4096 bytes
00:17:32.172 Memory Page Size Maximum: 4096 bytes
00:17:32.172 Persistent Memory Region: Not Supported
00:17:32.172 Optional Asynchronous Events Supported
00:17:32.172 Namespace Attribute Notices: Not Supported
00:17:32.172 Firmware Activation Notices: Not Supported
00:17:32.172 ANA Change Notices: Not Supported
00:17:32.172 PLE Aggregate Log Change Notices: Not Supported
00:17:32.172 LBA Status Info Alert Notices: Not Supported
00:17:32.172 EGE Aggregate Log Change Notices: Not Supported
00:17:32.172 Normal NVM Subsystem Shutdown event: Not Supported
00:17:32.172 Zone Descriptor Change Notices: Not Supported
00:17:32.172 Discovery Log Change Notices: Supported
00:17:32.172 Controller Attributes
00:17:32.172 128-bit Host Identifier: Not Supported
00:17:32.172 Non-Operational Permissive Mode: Not Supported
00:17:32.172 NVM Sets: Not Supported
00:17:32.172 Read Recovery Levels: Not Supported
00:17:32.172 Endurance Groups: Not Supported
00:17:32.172 Predictable Latency Mode: Not Supported
00:17:32.172 Traffic Based Keep ALive: Not Supported
00:17:32.172 Namespace Granularity: Not Supported
00:17:32.172 SQ Associations: Not Supported
00:17:32.172 UUID List: Not Supported
00:17:32.172 Multi-Domain Subsystem: Not Supported
00:17:32.172 Fixed Capacity Management: Not Supported
00:17:32.172 Variable Capacity Management: Not Supported
00:17:32.172 Delete Endurance Group: Not Supported
00:17:32.172 Delete NVM Set: Not Supported
00:17:32.172 Extended LBA Formats Supported: Not Supported
00:17:32.172 Flexible Data Placement Supported: Not Supported
00:17:32.172 
00:17:32.172 Controller Memory Buffer Support
00:17:32.172 ================================
00:17:32.172 Supported: No
00:17:32.172 
00:17:32.172 Persistent Memory Region Support
00:17:32.172 ================================
00:17:32.172 Supported: No
00:17:32.172 
00:17:32.172 Admin Command Set Attributes
00:17:32.172 ============================
00:17:32.172 Security Send/Receive: Not Supported
00:17:32.172 Format NVM: Not Supported
00:17:32.172 Firmware Activate/Download: Not Supported
00:17:32.172 Namespace Management: Not Supported
00:17:32.172 Device Self-Test: Not Supported
00:17:32.172 Directives: Not Supported
00:17:32.172 NVMe-MI: Not Supported
00:17:32.172 Virtualization Management: Not Supported
00:17:32.172 Doorbell Buffer Config: Not Supported
00:17:32.172 Get LBA Status Capability: Not Supported
00:17:32.172 Command & Feature Lockdown Capability: Not Supported
00:17:32.172 Abort Command Limit: 1
00:17:32.172 Async Event Request Limit: 4
00:17:32.172 Number of Firmware Slots: N/A
00:17:32.172 Firmware Slot 1 Read-Only: N/A
00:17:32.172 Firmware Activation Without Reset: N/A
00:17:32.172 Multiple Update Detection Support: N/A
00:17:32.172 Firmware Update Granularity: No Information Provided
00:17:32.172 Per-Namespace SMART Log: No
00:17:32.172 Asymmetric Namespace Access Log Page: Not Supported
00:17:32.172 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:17:32.172 Command Effects Log Page: Not Supported
00:17:32.172 Get Log Page Extended Data: Supported
00:17:32.172 Telemetry Log Pages: Not Supported
00:17:32.172 Persistent Event Log Pages: Not Supported
00:17:32.172 Supported Log Pages Log Page: May Support
00:17:32.172 Commands Supported & Effects Log Page: Not Supported
00:17:32.172 Feature Identifiers & Effects Log Page:May Support
00:17:32.172 NVMe-MI Commands & Effects Log Page: May Support
00:17:32.172 Data Area 4 for Telemetry Log: Not Supported
00:17:32.172 Error Log Page Entries Supported: 128
00:17:32.172 Keep Alive: Not Supported
00:17:32.172 
00:17:32.172 NVM Command Set Attributes
00:17:32.172 ==========================
00:17:32.172 Submission Queue Entry Size
00:17:32.172 Max: 1
00:17:32.172 Min: 1
00:17:32.172 Completion Queue Entry Size
00:17:32.172 Max: 1
00:17:32.172 Min: 1
00:17:32.172 Number of Namespaces: 0
00:17:32.172 Compare Command: Not Supported
00:17:32.172 Write Uncorrectable Command: Not Supported
00:17:32.172 Dataset Management Command: Not Supported
00:17:32.172 Write Zeroes Command: Not Supported
00:17:32.172 Set Features Save Field: Not Supported
00:17:32.172 Reservations: Not Supported
00:17:32.172 Timestamp: Not Supported
00:17:32.172 Copy: Not Supported
00:17:32.172 Volatile Write Cache: Not Present
00:17:32.172 Atomic Write Unit (Normal): 1
00:17:32.172 Atomic Write Unit (PFail): 1
00:17:32.172 Atomic Compare & Write Unit: 1
00:17:32.172 Fused Compare & Write: Supported
00:17:32.172 Scatter-Gather List
00:17:32.172 SGL Command Set: Supported
00:17:32.172 SGL Keyed: Supported
00:17:32.172 SGL Bit Bucket Descriptor: Not Supported
00:17:32.172 SGL Metadata Pointer: Not Supported
00:17:32.172 Oversized SGL: Not Supported
00:17:32.172 SGL Metadata Address: Not Supported
00:17:32.172 SGL Offset: Supported
00:17:32.172 Transport SGL Data Block: Not Supported
00:17:32.172 Replay Protected Memory Block: Not Supported
00:17:32.172 
00:17:32.172 Firmware Slot Information
00:17:32.172 =========================
00:17:32.172 Active slot: 0
00:17:32.173 
00:17:32.173 
00:17:32.173 Error Log
00:17:32.173 =========
00:17:32.173 
00:17:32.173 Active Namespaces
00:17:32.173 =================
00:17:32.173 Discovery Log Page
00:17:32.173 ==================
00:17:32.173 Generation Counter: 2
00:17:32.173 Number of Records: 2
00:17:32.173 Record Format: 0
00:17:32.173 
00:17:32.173 Discovery Log Entry 0
00:17:32.173 ----------------------
00:17:32.173 Transport Type: 3 (TCP)
00:17:32.173 Address Family: 1 (IPv4)
00:17:32.173 Subsystem Type: 3 (Current Discovery Subsystem)
00:17:32.173 Entry Flags:
00:17:32.173 Duplicate Returned Information: 1
00:17:32.173 Explicit Persistent Connection Support for Discovery: 1
00:17:32.173 Transport Requirements:
00:17:32.173 Secure Channel: Not Required
00:17:32.173 Port ID: 0 (0x0000)
00:17:32.173 Controller ID: 65535 (0xffff)
00:17:32.173 Admin Max SQ Size: 128
00:17:32.173 Transport Service Identifier: 4420
00:17:32.173 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:17:32.173 Transport Address: 10.0.0.2
00:17:32.173 Discovery Log Entry 1
00:17:32.173 ----------------------
00:17:32.173 Transport Type: 3 (TCP)
00:17:32.173 Address Family: 1 (IPv4)
00:17:32.173 Subsystem Type: 2 (NVM Subsystem)
00:17:32.173 Entry Flags:
00:17:32.173 Duplicate Returned Information: 0
00:17:32.173 Explicit Persistent Connection Support for Discovery: 0
00:17:32.173 Transport Requirements:
00:17:32.173 Secure Channel: Not Required
00:17:32.173 Port ID: 0 (0x0000)
00:17:32.173 Controller ID: 65535 (0xffff)
00:17:32.173 Admin Max SQ Size: 128
00:17:32.173 Transport Service Identifier: 4420
00:17:32.173 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:17:32.173 Transport Address: 10.0.0.2 [2024-05-15 10:57:48.364174] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:17:32.173 [2024-05-15 10:57:48.364203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:32.173 [2024-05-15 10:57:48.364217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:32.173 [2024-05-15 10:57:48.364228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:32.173 [2024-05-15 10:57:48.364256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:32.173 [2024-05-15 10:57:48.364271] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:17:32.173 [2024-05-15 10:57:48.364279] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:17:32.173 [2024-05-15 10:57:48.364286] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a6c80)
00:17:32.173 [2024-05-15 10:57:48.364297] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:32.173 [2024-05-15 10:57:48.364336] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa06260, cid 3, qid 0
00:17:32.173 [2024-05-15 10:57:48.364545] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:32.173 [2024-05-15 10:57:48.364562] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:32.173 [2024-05-15 10:57:48.364569] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:17:32.173 [2024-05-15 10:57:48.364576] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa06260) on tqpair=0x9a6c80
00:17:32.173 [2024-05-15 10:57:48.364589] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter
00:17:32.173 [2024-05-15 10:57:48.364598] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:17:32.173 [2024-05-15 10:57:48.364604] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a6c80)
00:17:32.173 [2024-05-15 10:57:48.364615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:32.173 [2024-05-15 10:57:48.364644] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa06260, cid 3, qid 0
00:17:32.173 [2024-05-15 10:57:48.364876] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:17:32.173 [2024-05-15 10:57:48.364892] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:17:32.173 [2024-05-15 10:57:48.364899]
nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.173 [2024-05-15 10:57:48.364906] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa06260) on tqpair=0x9a6c80 00:17:32.173 [2024-05-15 10:57:48.364915] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:32.173 [2024-05-15 10:57:48.364925] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:32.173 [2024-05-15 10:57:48.364951] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.173 [2024-05-15 10:57:48.364961] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.173 [2024-05-15 10:57:48.364968] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a6c80) 00:17:32.173 [2024-05-15 10:57:48.364979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.173 [2024-05-15 10:57:48.365000] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa06260, cid 3, qid 0 00:17:32.173 [2024-05-15 10:57:48.365184] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.173 [2024-05-15 10:57:48.365199] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.173 [2024-05-15 10:57:48.365206] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.173 [2024-05-15 10:57:48.365213] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa06260) on tqpair=0x9a6c80 00:17:32.173 [2024-05-15 10:57:48.365233] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.173 [2024-05-15 10:57:48.365242] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.173 [2024-05-15 10:57:48.365249] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a6c80) 00:17:32.173 [2024-05-15 10:57:48.365259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.173 [2024-05-15 10:57:48.365296] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa06260, cid 3, qid 0 00:17:32.173 [2024-05-15 10:57:48.365487] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.173 [2024-05-15 10:57:48.365503] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.173 [2024-05-15 10:57:48.365510] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.173 [2024-05-15 10:57:48.365517] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa06260) on tqpair=0x9a6c80 00:17:32.173 [2024-05-15 10:57:48.365536] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.173 [2024-05-15 10:57:48.365545] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.173 [2024-05-15 10:57:48.365552] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a6c80) 00:17:32.173 [2024-05-15 10:57:48.365563] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.173 [2024-05-15 10:57:48.365584] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa06260, cid 3, qid 0 00:17:32.173 [2024-05-15 10:57:48.365766] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.173 [2024-05-15 
10:57:48.365782] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.173 [2024-05-15 10:57:48.365789] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.173 [2024-05-15 10:57:48.365796] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa06260) on tqpair=0x9a6c80 00:17:32.173 [2024-05-15 10:57:48.365814] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.173 [2024-05-15 10:57:48.365824] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.173 [2024-05-15 10:57:48.365830] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a6c80) 00:17:32.173 [2024-05-15 10:57:48.365841] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.173 [2024-05-15 10:57:48.365877] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa06260, cid 3, qid 0 00:17:32.173 [2024-05-15 10:57:48.366077] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.173 [2024-05-15 10:57:48.366093] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.173 [2024-05-15 10:57:48.366100] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.173 [2024-05-15 10:57:48.366107] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa06260) on tqpair=0x9a6c80 00:17:32.173 [2024-05-15 10:57:48.366126] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.173 [2024-05-15 10:57:48.366136] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.173 [2024-05-15 10:57:48.366142] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a6c80) 00:17:32.173 [2024-05-15 10:57:48.366153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.173 [2024-05-15 10:57:48.366174] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa06260, cid 3, qid 0 00:17:32.173 [2024-05-15 10:57:48.366359] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.173 [2024-05-15 10:57:48.366374] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.173 [2024-05-15 10:57:48.366381] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.173 [2024-05-15 10:57:48.366387] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa06260) on tqpair=0x9a6c80 00:17:32.173 [2024-05-15 10:57:48.366406] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.173 [2024-05-15 10:57:48.366415] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.173 [2024-05-15 10:57:48.366422] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a6c80) 00:17:32.173 [2024-05-15 10:57:48.366433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.173 [2024-05-15 10:57:48.366469] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa06260, cid 3, qid 0 00:17:32.173 [2024-05-15 10:57:48.366658] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.173 [2024-05-15 10:57:48.366678] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.173 [2024-05-15 10:57:48.366686] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.173 
[2024-05-15 10:57:48.366693] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa06260) on tqpair=0x9a6c80 00:17:32.173 [2024-05-15 10:57:48.366712] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.366722] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.366729] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a6c80) 00:17:32.174 [2024-05-15 10:57:48.366739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.174 [2024-05-15 10:57:48.366761] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa06260, cid 3, qid 0 00:17:32.174 [2024-05-15 10:57:48.366978] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.174 [2024-05-15 10:57:48.366994] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.174 [2024-05-15 10:57:48.367001] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.367008] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa06260) on tqpair=0x9a6c80 00:17:32.174 [2024-05-15 10:57:48.367026] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.367035] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.367042] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a6c80) 00:17:32.174 [2024-05-15 10:57:48.367053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.174 [2024-05-15 10:57:48.367074] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa06260, cid 3, qid 0 00:17:32.174 [2024-05-15 10:57:48.367256] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.174 [2024-05-15 10:57:48.367271] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.174 [2024-05-15 10:57:48.367278] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.367285] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa06260) on tqpair=0x9a6c80 00:17:32.174 [2024-05-15 10:57:48.367303] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.367313] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.367320] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a6c80) 00:17:32.174 [2024-05-15 10:57:48.367330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.174 [2024-05-15 10:57:48.367365] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa06260, cid 3, qid 0 00:17:32.174 [2024-05-15 10:57:48.367558] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.174 [2024-05-15 10:57:48.367574] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.174 [2024-05-15 10:57:48.367581] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.367588] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa06260) on tqpair=0x9a6c80 00:17:32.174 [2024-05-15 10:57:48.367607] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.367617] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.367623] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a6c80) 00:17:32.174 [2024-05-15 10:57:48.367634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.174 [2024-05-15 10:57:48.367655] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa06260, cid 3, qid 0 00:17:32.174 [2024-05-15 10:57:48.367865] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.174 [2024-05-15 10:57:48.367881] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.174 [2024-05-15 10:57:48.367892] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.367900] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa06260) on tqpair=0x9a6c80 00:17:32.174 [2024-05-15 10:57:48.367919] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.367934] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.367942] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a6c80) 00:17:32.174 [2024-05-15 10:57:48.367953] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.174 [2024-05-15 10:57:48.367974] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa06260, cid 3, qid 0 00:17:32.174 [2024-05-15 10:57:48.368162] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.174 [2024-05-15 10:57:48.368178] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.174 [2024-05-15 10:57:48.368185] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.368192] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa06260) on tqpair=0x9a6c80 00:17:32.174 [2024-05-15 10:57:48.368210] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.368220] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.368227] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a6c80) 00:17:32.174 [2024-05-15 10:57:48.368237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.174 [2024-05-15 10:57:48.368274] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa06260, cid 3, qid 0 00:17:32.174 [2024-05-15 10:57:48.368470] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.174 [2024-05-15 10:57:48.368486] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.174 [2024-05-15 10:57:48.368493] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.368500] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa06260) on tqpair=0x9a6c80 00:17:32.174 [2024-05-15 10:57:48.368519] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.368529] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.368536] 
nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a6c80) 00:17:32.174 [2024-05-15 10:57:48.368546] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.174 [2024-05-15 10:57:48.368567] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa06260, cid 3, qid 0 00:17:32.174 [2024-05-15 10:57:48.368739] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.174 [2024-05-15 10:57:48.368754] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.174 [2024-05-15 10:57:48.368761] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.368768] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa06260) on tqpair=0x9a6c80 00:17:32.174 [2024-05-15 10:57:48.368786] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.368796] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.368803] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a6c80) 00:17:32.174 [2024-05-15 10:57:48.368813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.174 [2024-05-15 10:57:48.368834] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa06260, cid 3, qid 0 00:17:32.174 [2024-05-15 10:57:48.372947] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.174 [2024-05-15 10:57:48.372965] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.174 [2024-05-15 10:57:48.372973] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.372984] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa06260) on tqpair=0x9a6c80 00:17:32.174 [2024-05-15 10:57:48.373014] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.373025] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.373032] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x9a6c80) 00:17:32.174 [2024-05-15 10:57:48.373043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.174 [2024-05-15 10:57:48.373066] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xa06260, cid 3, qid 0 00:17:32.174 [2024-05-15 10:57:48.373249] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.174 [2024-05-15 10:57:48.373265] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.174 [2024-05-15 10:57:48.373272] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.174 [2024-05-15 10:57:48.373279] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xa06260) on tqpair=0x9a6c80 00:17:32.174 [2024-05-15 10:57:48.373292] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 8 milliseconds 00:17:32.174 00:17:32.174 10:57:48 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:32.436 
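The second spdk_nvme_identify run above targets the I/O subsystem nqn.2016-06.io.spdk:cnode1 directly. Stripped of option parsing, the front half of that tool reduces to parsing the -r transport-ID string and connecting. A hedged sketch, simplified from what the real tool in the SPDK tree does, with error paths elided:

    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"

    int
    main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;

        spdk_env_opts_init(&env_opts);
        env_opts.name = "identify_sketch";   /* hypothetical app name */
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* The same string that was passed to -r above. */
        spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1");

        /* Synchronous connect: this single call drives the whole init state
         * machine traced below: icreq/icresp, FABRIC CONNECT, VS/CAP reads,
         * the CC.EN toggle, IDENTIFY, AER setup and keep-alive. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        /* ... identify / log page / feature queries would run here ... */

        spdk_nvme_detach(ctrlr);   /* CC.SHN shutdown, as in the trace above */
        return 0;
    }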
[2024-05-15 10:57:48.407667] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:17:32.437 [2024-05-15 10:57:48.407713] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2835708 ] 00:17:32.437 EAL: No free 2048 kB hugepages reported on node 1 00:17:32.437 [2024-05-15 10:57:48.441637] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:32.437 [2024-05-15 10:57:48.441682] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:32.437 [2024-05-15 10:57:48.441692] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:32.437 [2024-05-15 10:57:48.441705] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:32.437 [2024-05-15 10:57:48.441717] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:32.437 [2024-05-15 10:57:48.444981] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:32.437 [2024-05-15 10:57:48.445019] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d34c80 0 00:17:32.437 [2024-05-15 10:57:48.452221] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:32.437 [2024-05-15 10:57:48.452241] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:32.437 [2024-05-15 10:57:48.452253] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:32.437 [2024-05-15 10:57:48.452261] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:32.437 [2024-05-15 10:57:48.452298] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.437 [2024-05-15 10:57:48.452317] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.437 [2024-05-15 10:57:48.452324] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d34c80) 00:17:32.437 [2024-05-15 10:57:48.452339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:32.437 [2024-05-15 10:57:48.452364] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d93e40, cid 0, qid 0 00:17:32.437 [2024-05-15 10:57:48.458943] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.437 [2024-05-15 10:57:48.458960] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.437 [2024-05-15 10:57:48.459001] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.437 [2024-05-15 10:57:48.459009] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d93e40) on tqpair=0x1d34c80 00:17:32.437 [2024-05-15 10:57:48.459025] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:32.437 [2024-05-15 10:57:48.459035] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:32.437 [2024-05-15 10:57:48.459045] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:32.437 [2024-05-15 10:57:48.459061] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.437 [2024-05-15 
10:57:48.459070] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.437 [2024-05-15 10:57:48.459076] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d34c80) 00:17:32.437 [2024-05-15 10:57:48.459088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.437 [2024-05-15 10:57:48.459112] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d93e40, cid 0, qid 0 00:17:32.437 [2024-05-15 10:57:48.459328] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.437 [2024-05-15 10:57:48.459343] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.437 [2024-05-15 10:57:48.459350] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.437 [2024-05-15 10:57:48.459357] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d93e40) on tqpair=0x1d34c80 00:17:32.437 [2024-05-15 10:57:48.459366] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:32.437 [2024-05-15 10:57:48.459379] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:32.437 [2024-05-15 10:57:48.459392] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.437 [2024-05-15 10:57:48.459399] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.437 [2024-05-15 10:57:48.459406] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d34c80) 00:17:32.437 [2024-05-15 10:57:48.459416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.437 [2024-05-15 10:57:48.459437] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d93e40, cid 0, qid 0 00:17:32.437 [2024-05-15 10:57:48.459639] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.437 [2024-05-15 10:57:48.459655] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.437 [2024-05-15 10:57:48.459661] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.437 [2024-05-15 10:57:48.459668] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d93e40) on tqpair=0x1d34c80 00:17:32.437 [2024-05-15 10:57:48.459678] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:17:32.437 [2024-05-15 10:57:48.459691] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:32.437 [2024-05-15 10:57:48.459703] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.437 [2024-05-15 10:57:48.459711] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.437 [2024-05-15 10:57:48.459717] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d34c80) 00:17:32.437 [2024-05-15 10:57:48.459728] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.437 [2024-05-15 10:57:48.459749] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d93e40, cid 0, qid 0 00:17:32.437 [2024-05-15 10:57:48.459971] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.437 
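The "read vs"/"read cap" states and the FABRIC PROPERTY GET capsules they generate are the fabrics stand-in for MMIO register reads; once connect completes, the cached values are visible through the public API. A small sketch, assuming a connected `ctrlr`:

    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"

    static void
    print_regs(struct spdk_nvme_ctrlr *ctrlr)
    {
        const union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
        const union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

        /* Matches "NVMe Specification Version (VS): 1.3" and
         * "Maximum Queue Entries: 128" in the identify report above. */
        printf("VS: %u.%u\n", (unsigned)vs.bits.mjr, (unsigned)vs.bits.mnr);
        printf("Max queue entries: %u\n", (unsigned)cap.bits.mqes + 1); /* MQES is 0-based */
    }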
[2024-05-15 10:57:48.459990] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.437 [2024-05-15 10:57:48.459998] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.437 [2024-05-15 10:57:48.460005] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d93e40) on tqpair=0x1d34c80 00:17:32.437 [2024-05-15 10:57:48.460015] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:32.437 [2024-05-15 10:57:48.460031] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.437 [2024-05-15 10:57:48.460040] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.437 [2024-05-15 10:57:48.460047] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d34c80) 00:17:32.437 [2024-05-15 10:57:48.460057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.437 [2024-05-15 10:57:48.460078] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d93e40, cid 0, qid 0 00:17:32.437 [2024-05-15 10:57:48.460290] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.437 [2024-05-15 10:57:48.460305] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.437 [2024-05-15 10:57:48.460312] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.437 [2024-05-15 10:57:48.460319] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d93e40) on tqpair=0x1d34c80 00:17:32.437 [2024-05-15 10:57:48.460327] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:32.437 [2024-05-15 10:57:48.460336] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:32.437 [2024-05-15 10:57:48.460349] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:32.437 [2024-05-15 10:57:48.460458] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:32.437 [2024-05-15 10:57:48.460465] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:32.437 [2024-05-15 10:57:48.460493] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.437 [2024-05-15 10:57:48.460501] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.437 [2024-05-15 10:57:48.460507] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d34c80) 00:17:32.437 [2024-05-15 10:57:48.460517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.437 [2024-05-15 10:57:48.460537] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d93e40, cid 0, qid 0 00:17:32.437 [2024-05-15 10:57:48.460748] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.437 [2024-05-15 10:57:48.460763] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.437 [2024-05-15 10:57:48.460770] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.437 [2024-05-15 
10:57:48.460776] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d93e40) on tqpair=0x1d34c80 00:17:32.437 [2024-05-15 10:57:48.460786] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:32.437 [2024-05-15 10:57:48.460803] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.437 [2024-05-15 10:57:48.460812] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.437 [2024-05-15 10:57:48.460819] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d34c80) 00:17:32.437 [2024-05-15 10:57:48.460829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.437 [2024-05-15 10:57:48.460850] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d93e40, cid 0, qid 0 00:17:32.437 [2024-05-15 10:57:48.461064] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.437 [2024-05-15 10:57:48.461079] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.437 [2024-05-15 10:57:48.461086] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.437 [2024-05-15 10:57:48.461092] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d93e40) on tqpair=0x1d34c80 00:17:32.437 [2024-05-15 10:57:48.461101] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:32.437 [2024-05-15 10:57:48.461109] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:32.437 [2024-05-15 10:57:48.461122] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:32.437 [2024-05-15 10:57:48.461136] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:32.437 [2024-05-15 10:57:48.461149] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.437 [2024-05-15 10:57:48.461158] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d34c80) 00:17:32.437 [2024-05-15 10:57:48.461169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.437 [2024-05-15 10:57:48.461190] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d93e40, cid 0, qid 0 00:17:32.437 [2024-05-15 10:57:48.461449] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:32.437 [2024-05-15 10:57:48.461462] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:32.437 [2024-05-15 10:57:48.461469] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.461475] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d34c80): datao=0, datal=4096, cccid=0 00:17:32.438 [2024-05-15 10:57:48.461483] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d93e40) on tqpair(0x1d34c80): expected_datao=0, payload_size=4096 00:17:32.438 [2024-05-15 10:57:48.461490] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.461501] 
nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.461509] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.461590] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.438 [2024-05-15 10:57:48.461608] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.438 [2024-05-15 10:57:48.461617] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.461626] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d93e40) on tqpair=0x1d34c80 00:17:32.438 [2024-05-15 10:57:48.461643] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:32.438 [2024-05-15 10:57:48.461654] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:32.438 [2024-05-15 10:57:48.461664] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:32.438 [2024-05-15 10:57:48.461673] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:32.438 [2024-05-15 10:57:48.461683] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:32.438 [2024-05-15 10:57:48.461694] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:32.438 [2024-05-15 10:57:48.461717] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:32.438 [2024-05-15 10:57:48.461737] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.461748] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.461774] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d34c80) 00:17:32.438 [2024-05-15 10:57:48.461788] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:32.438 [2024-05-15 10:57:48.461815] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d93e40, cid 0, qid 0 00:17:32.438 [2024-05-15 10:57:48.462077] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.438 [2024-05-15 10:57:48.462096] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.438 [2024-05-15 10:57:48.462106] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.462115] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d93e40) on tqpair=0x1d34c80 00:17:32.438 [2024-05-15 10:57:48.462130] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.462140] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.462149] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d34c80) 00:17:32.438 [2024-05-15 10:57:48.462162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.438 [2024-05-15 10:57:48.462176] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.462185] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.462194] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d34c80) 00:17:32.438 [2024-05-15 10:57:48.462206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.438 [2024-05-15 10:57:48.462233] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.462242] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.462250] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d34c80) 00:17:32.438 [2024-05-15 10:57:48.462262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.438 [2024-05-15 10:57:48.462274] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.462283] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.462291] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d34c80) 00:17:32.438 [2024-05-15 10:57:48.462303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.438 [2024-05-15 10:57:48.462314] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:32.438 [2024-05-15 10:57:48.462337] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:32.438 [2024-05-15 10:57:48.462354] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.462363] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d34c80) 00:17:32.438 [2024-05-15 10:57:48.462376] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.438 [2024-05-15 10:57:48.462403] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d93e40, cid 0, qid 0 00:17:32.438 [2024-05-15 10:57:48.462431] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d93fa0, cid 1, qid 0 00:17:32.438 [2024-05-15 10:57:48.462442] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d94100, cid 2, qid 0 00:17:32.438 [2024-05-15 10:57:48.462457] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d94260, cid 3, qid 0 00:17:32.438 [2024-05-15 10:57:48.462470] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d943c0, cid 4, qid 0 00:17:32.438 [2024-05-15 10:57:48.462682] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.438 [2024-05-15 10:57:48.462698] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.438 [2024-05-15 10:57:48.462705] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.462712] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d943c0) on tqpair=0x1d34c80 00:17:32.438 [2024-05-15 10:57:48.462721] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:32.438 [2024-05-15 
10:57:48.462730] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:32.438 [2024-05-15 10:57:48.462745] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:32.438 [2024-05-15 10:57:48.462770] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:32.438 [2024-05-15 10:57:48.462782] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.462790] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.462796] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d34c80) 00:17:32.438 [2024-05-15 10:57:48.462807] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:32.438 [2024-05-15 10:57:48.462829] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d943c0, cid 4, qid 0 00:17:32.438 [2024-05-15 10:57:48.466947] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.438 [2024-05-15 10:57:48.466963] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.438 [2024-05-15 10:57:48.466970] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.466977] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d943c0) on tqpair=0x1d34c80 00:17:32.438 [2024-05-15 10:57:48.467034] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:32.438 [2024-05-15 10:57:48.467054] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:32.438 [2024-05-15 10:57:48.467069] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.467076] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d34c80) 00:17:32.438 [2024-05-15 10:57:48.467087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.438 [2024-05-15 10:57:48.467109] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d943c0, cid 4, qid 0 00:17:32.438 [2024-05-15 10:57:48.467338] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:32.438 [2024-05-15 10:57:48.467354] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:32.438 [2024-05-15 10:57:48.467360] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.467367] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d34c80): datao=0, datal=4096, cccid=4 00:17:32.438 [2024-05-15 10:57:48.467374] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d943c0) on tqpair(0x1d34c80): expected_datao=0, payload_size=4096 00:17:32.438 [2024-05-15 10:57:48.467382] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.467448] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.467458] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.467621] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.438 [2024-05-15 10:57:48.467636] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.438 [2024-05-15 10:57:48.467643] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.467654] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d943c0) on tqpair=0x1d34c80 00:17:32.438 [2024-05-15 10:57:48.467677] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:32.438 [2024-05-15 10:57:48.467702] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:32.438 [2024-05-15 10:57:48.467720] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:32.438 [2024-05-15 10:57:48.467733] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.467742] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d34c80) 00:17:32.438 [2024-05-15 10:57:48.467752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.438 [2024-05-15 10:57:48.467774] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d943c0, cid 4, qid 0 00:17:32.438 [2024-05-15 10:57:48.468022] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:32.438 [2024-05-15 10:57:48.468036] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:32.438 [2024-05-15 10:57:48.468043] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.468049] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d34c80): datao=0, datal=4096, cccid=4 00:17:32.438 [2024-05-15 10:57:48.468057] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d943c0) on tqpair(0x1d34c80): expected_datao=0, payload_size=4096 00:17:32.438 [2024-05-15 10:57:48.468064] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.468074] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.468082] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:32.438 [2024-05-15 10:57:48.468163] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.439 [2024-05-15 10:57:48.468174] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.439 [2024-05-15 10:57:48.468181] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.468187] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d943c0) on tqpair=0x1d34c80 00:17:32.439 [2024-05-15 10:57:48.468207] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:32.439 [2024-05-15 10:57:48.468225] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:32.439 [2024-05-15 10:57:48.468238] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.439 
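"Namespace 1 was added" above marks the point where identify-active-ns has populated the namespace list; the following IDENTIFY commands with nsid:1 (cdw10:00000000 and cdw10:00000003) then fill in the namespace data and its ID descriptors. Once connect returns, the active namespaces can be walked from application code. A hedged sketch, assuming a connected `ctrlr`:

    #include "spdk/stdinc.h"
    #include "spdk/nvme.h"

    static void
    list_namespaces(struct spdk_nvme_ctrlr *ctrlr)
    {
        uint32_t nsid;

        /* Iterate the active-namespace list built during init. */
        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
             nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

            printf("Namespace %u: %" PRIu64 " bytes, %u-byte sectors\n",
                   nsid, spdk_nvme_ns_get_size(ns),
                   spdk_nvme_ns_get_sector_size(ns));
        }
    }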
[2024-05-15 10:57:48.468246] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d34c80) 00:17:32.439 [2024-05-15 10:57:48.468257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.439 [2024-05-15 10:57:48.468278] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d943c0, cid 4, qid 0 00:17:32.439 [2024-05-15 10:57:48.468516] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:32.439 [2024-05-15 10:57:48.468531] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:32.439 [2024-05-15 10:57:48.468538] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.468544] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d34c80): datao=0, datal=4096, cccid=4 00:17:32.439 [2024-05-15 10:57:48.468552] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d943c0) on tqpair(0x1d34c80): expected_datao=0, payload_size=4096 00:17:32.439 [2024-05-15 10:57:48.468560] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.468570] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.468577] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.468679] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.439 [2024-05-15 10:57:48.468691] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.439 [2024-05-15 10:57:48.468698] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.468705] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d943c0) on tqpair=0x1d34c80 00:17:32.439 [2024-05-15 10:57:48.468726] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:32.439 [2024-05-15 10:57:48.468742] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:32.439 [2024-05-15 10:57:48.468756] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:32.439 [2024-05-15 10:57:48.468767] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:32.439 [2024-05-15 10:57:48.468776] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:32.439 [2024-05-15 10:57:48.468785] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:32.439 [2024-05-15 10:57:48.468793] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:32.439 [2024-05-15 10:57:48.468802] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:32.439 [2024-05-15 10:57:48.468839] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.468848] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d34c80) 
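The admin sequence being traced (Set Features: Number of Queues, then Identify with CNS 02h, 00h and 03h against namespace 1) is the standard controller and namespace enumeration. The same steps can be driven by hand from a Linux initiator with nvme-cli; the device node /dev/nvme0 is an assumption for illustration:

# Assumes the kernel nvme-tcp initiator and nvme-cli on a host that can
# reach the target from this run.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme id-ctrl /dev/nvme0          # Identify Controller           (CNS 01h)
nvme list-ns /dev/nvme0          # active namespace ID list      (CNS 02h)
nvme id-ns /dev/nvme0 -n 1       # Identify Namespace            (CNS 00h)
nvme ns-descs /dev/nvme0 -n 1    # NS identification descriptors (CNS 03h)
nvme disconnect -n nqn.2016-06.io.spdk:cnode1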
00:17:32.439 [2024-05-15 10:57:48.468859] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.439 [2024-05-15 10:57:48.468870] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.468877] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.468883] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d34c80) 00:17:32.439 [2024-05-15 10:57:48.468892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:32.439 [2024-05-15 10:57:48.468916] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d943c0, cid 4, qid 0 00:17:32.439 [2024-05-15 10:57:48.468951] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d94520, cid 5, qid 0 00:17:32.439 [2024-05-15 10:57:48.469174] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.439 [2024-05-15 10:57:48.469186] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.439 [2024-05-15 10:57:48.469193] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.469199] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d943c0) on tqpair=0x1d34c80 00:17:32.439 [2024-05-15 10:57:48.469210] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.439 [2024-05-15 10:57:48.469220] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.439 [2024-05-15 10:57:48.469226] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.469232] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d94520) on tqpair=0x1d34c80 00:17:32.439 [2024-05-15 10:57:48.469248] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.469257] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d34c80) 00:17:32.439 [2024-05-15 10:57:48.469268] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.439 [2024-05-15 10:57:48.469292] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d94520, cid 5, qid 0 00:17:32.439 [2024-05-15 10:57:48.469515] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.439 [2024-05-15 10:57:48.469527] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.439 [2024-05-15 10:57:48.469534] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.469541] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d94520) on tqpair=0x1d34c80 00:17:32.439 [2024-05-15 10:57:48.469557] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.469566] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d34c80) 00:17:32.439 [2024-05-15 10:57:48.469576] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.439 [2024-05-15 10:57:48.469596] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d94520, cid 5, qid 0 00:17:32.439 [2024-05-15 10:57:48.469822] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.439 [2024-05-15 10:57:48.469837] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.439 [2024-05-15 10:57:48.469844] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.469850] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d94520) on tqpair=0x1d34c80 00:17:32.439 [2024-05-15 10:57:48.469868] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.469877] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d34c80) 00:17:32.439 [2024-05-15 10:57:48.469887] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.439 [2024-05-15 10:57:48.469908] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d94520, cid 5, qid 0 00:17:32.439 [2024-05-15 10:57:48.470116] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.439 [2024-05-15 10:57:48.470131] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.439 [2024-05-15 10:57:48.470138] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.470144] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d94520) on tqpair=0x1d34c80 00:17:32.439 [2024-05-15 10:57:48.470165] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.470175] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d34c80) 00:17:32.439 [2024-05-15 10:57:48.470186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.439 [2024-05-15 10:57:48.470198] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.470206] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d34c80) 00:17:32.439 [2024-05-15 10:57:48.470215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.439 [2024-05-15 10:57:48.470227] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.470235] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1d34c80) 00:17:32.439 [2024-05-15 10:57:48.470244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.439 [2024-05-15 10:57:48.470261] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.470270] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d34c80) 00:17:32.439 [2024-05-15 10:57:48.470279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.439 [2024-05-15 10:57:48.470321] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d94520, cid 5, qid 0 00:17:32.439 [2024-05-15 10:57:48.470333] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d943c0, cid 
4, qid 0 00:17:32.439 [2024-05-15 10:57:48.470340] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d94680, cid 6, qid 0 00:17:32.439 [2024-05-15 10:57:48.470348] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d947e0, cid 7, qid 0 00:17:32.439 [2024-05-15 10:57:48.470785] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:32.439 [2024-05-15 10:57:48.470801] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:32.439 [2024-05-15 10:57:48.470808] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.470814] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d34c80): datao=0, datal=8192, cccid=5 00:17:32.439 [2024-05-15 10:57:48.470822] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d94520) on tqpair(0x1d34c80): expected_datao=0, payload_size=8192 00:17:32.439 [2024-05-15 10:57:48.470829] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.470839] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.470847] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.470855] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:32.439 [2024-05-15 10:57:48.470864] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:32.439 [2024-05-15 10:57:48.470871] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.470877] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d34c80): datao=0, datal=512, cccid=4 00:17:32.439 [2024-05-15 10:57:48.470884] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d943c0) on tqpair(0x1d34c80): expected_datao=0, payload_size=512 00:17:32.439 [2024-05-15 10:57:48.470892] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.470901] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.470908] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:32.439 [2024-05-15 10:57:48.470917] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:32.439 [2024-05-15 10:57:48.470925] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:32.440 [2024-05-15 10:57:48.474941] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:32.440 [2024-05-15 10:57:48.474950] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d34c80): datao=0, datal=512, cccid=6 00:17:32.440 [2024-05-15 10:57:48.474958] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d94680) on tqpair(0x1d34c80): expected_datao=0, payload_size=512 00:17:32.440 [2024-05-15 10:57:48.474965] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.440 [2024-05-15 10:57:48.474976] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:32.440 [2024-05-15 10:57:48.474983] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:32.440 [2024-05-15 10:57:48.474991] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:32.440 [2024-05-15 10:57:48.475000] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:32.440 [2024-05-15 10:57:48.475006] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:32.440 [2024-05-15 10:57:48.475012] 
nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d34c80): datao=0, datal=4096, cccid=7 00:17:32.440 [2024-05-15 10:57:48.475019] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d947e0) on tqpair(0x1d34c80): expected_datao=0, payload_size=4096 00:17:32.440 [2024-05-15 10:57:48.475026] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.440 [2024-05-15 10:57:48.475036] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:32.440 [2024-05-15 10:57:48.475043] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:32.440 [2024-05-15 10:57:48.475054] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.440 [2024-05-15 10:57:48.475063] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.440 [2024-05-15 10:57:48.475074] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.440 [2024-05-15 10:57:48.475081] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d94520) on tqpair=0x1d34c80 00:17:32.440 [2024-05-15 10:57:48.475101] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.440 [2024-05-15 10:57:48.475111] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.440 [2024-05-15 10:57:48.475118] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.440 [2024-05-15 10:57:48.475124] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d943c0) on tqpair=0x1d34c80 00:17:32.440 [2024-05-15 10:57:48.475139] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.440 [2024-05-15 10:57:48.475149] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.440 [2024-05-15 10:57:48.475155] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.440 [2024-05-15 10:57:48.475161] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d94680) on tqpair=0x1d34c80 00:17:32.440 [2024-05-15 10:57:48.475176] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.440 [2024-05-15 10:57:48.475186] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.440 [2024-05-15 10:57:48.475192] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.440 [2024-05-15 10:57:48.475199] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d947e0) on tqpair=0x1d34c80 00:17:32.440 ===================================================== 00:17:32.440 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:32.440 ===================================================== 00:17:32.440 Controller Capabilities/Features 00:17:32.440 ================================ 00:17:32.440 Vendor ID: 8086 00:17:32.440 Subsystem Vendor ID: 8086 00:17:32.440 Serial Number: SPDK00000000000001 00:17:32.440 Model Number: SPDK bdev Controller 00:17:32.440 Firmware Version: 24.05 00:17:32.440 Recommended Arb Burst: 6 00:17:32.440 IEEE OUI Identifier: e4 d2 5c 00:17:32.440 Multi-path I/O 00:17:32.440 May have multiple subsystem ports: Yes 00:17:32.440 May have multiple controllers: Yes 00:17:32.440 Associated with SR-IOV VF: No 00:17:32.440 Max Data Transfer Size: 131072 00:17:32.440 Max Number of Namespaces: 32 00:17:32.440 Max Number of I/O Queues: 127 00:17:32.440 NVMe Specification Version (VS): 1.3 00:17:32.440 NVMe Specification Version (Identify): 1.3 00:17:32.440 Maximum Queue Entries: 128 00:17:32.440 Contiguous Queues Required: Yes 
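The report that begins here is the identify example rendering the target's Identify Controller data: a fabrics controller ("SPDK bdev Controller") advertising NVMe 1.3, a 128 KiB maximum transfer size and up to 127 I/O queues. The raw fields behind these lines can be pulled as JSON from an initiator; the device node is again an assumption:

# mdts is a power-of-two multiplier of the 4 KiB minimum page size, so
# mdts=5 corresponds to the 131072-byte limit above; kas is the Keep Alive
# granularity in 100 ms units (100 -> the 10000 ms reported further down).
nvme id-ctrl /dev/nvme0 -o json | jq '{mdts, nn, kas, sqes, cqes}'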
00:17:32.440 Arbitration Mechanisms Supported 00:17:32.440 Weighted Round Robin: Not Supported 00:17:32.440 Vendor Specific: Not Supported 00:17:32.440 Reset Timeout: 15000 ms 00:17:32.440 Doorbell Stride: 4 bytes 00:17:32.440 NVM Subsystem Reset: Not Supported 00:17:32.440 Command Sets Supported 00:17:32.440 NVM Command Set: Supported 00:17:32.440 Boot Partition: Not Supported 00:17:32.440 Memory Page Size Minimum: 4096 bytes 00:17:32.440 Memory Page Size Maximum: 4096 bytes 00:17:32.440 Persistent Memory Region: Not Supported 00:17:32.440 Optional Asynchronous Events Supported 00:17:32.440 Namespace Attribute Notices: Supported 00:17:32.440 Firmware Activation Notices: Not Supported 00:17:32.440 ANA Change Notices: Not Supported 00:17:32.440 PLE Aggregate Log Change Notices: Not Supported 00:17:32.440 LBA Status Info Alert Notices: Not Supported 00:17:32.440 EGE Aggregate Log Change Notices: Not Supported 00:17:32.440 Normal NVM Subsystem Shutdown event: Not Supported 00:17:32.440 Zone Descriptor Change Notices: Not Supported 00:17:32.440 Discovery Log Change Notices: Not Supported 00:17:32.440 Controller Attributes 00:17:32.440 128-bit Host Identifier: Supported 00:17:32.440 Non-Operational Permissive Mode: Not Supported 00:17:32.440 NVM Sets: Not Supported 00:17:32.440 Read Recovery Levels: Not Supported 00:17:32.440 Endurance Groups: Not Supported 00:17:32.440 Predictable Latency Mode: Not Supported 00:17:32.440 Traffic Based Keep ALive: Not Supported 00:17:32.440 Namespace Granularity: Not Supported 00:17:32.440 SQ Associations: Not Supported 00:17:32.440 UUID List: Not Supported 00:17:32.440 Multi-Domain Subsystem: Not Supported 00:17:32.440 Fixed Capacity Management: Not Supported 00:17:32.440 Variable Capacity Management: Not Supported 00:17:32.440 Delete Endurance Group: Not Supported 00:17:32.440 Delete NVM Set: Not Supported 00:17:32.440 Extended LBA Formats Supported: Not Supported 00:17:32.440 Flexible Data Placement Supported: Not Supported 00:17:32.440 00:17:32.440 Controller Memory Buffer Support 00:17:32.440 ================================ 00:17:32.440 Supported: No 00:17:32.440 00:17:32.440 Persistent Memory Region Support 00:17:32.440 ================================ 00:17:32.440 Supported: No 00:17:32.440 00:17:32.440 Admin Command Set Attributes 00:17:32.440 ============================ 00:17:32.440 Security Send/Receive: Not Supported 00:17:32.440 Format NVM: Not Supported 00:17:32.440 Firmware Activate/Download: Not Supported 00:17:32.440 Namespace Management: Not Supported 00:17:32.440 Device Self-Test: Not Supported 00:17:32.440 Directives: Not Supported 00:17:32.440 NVMe-MI: Not Supported 00:17:32.440 Virtualization Management: Not Supported 00:17:32.440 Doorbell Buffer Config: Not Supported 00:17:32.440 Get LBA Status Capability: Not Supported 00:17:32.440 Command & Feature Lockdown Capability: Not Supported 00:17:32.440 Abort Command Limit: 4 00:17:32.440 Async Event Request Limit: 4 00:17:32.440 Number of Firmware Slots: N/A 00:17:32.440 Firmware Slot 1 Read-Only: N/A 00:17:32.440 Firmware Activation Without Reset: N/A 00:17:32.440 Multiple Update Detection Support: N/A 00:17:32.440 Firmware Update Granularity: No Information Provided 00:17:32.440 Per-Namespace SMART Log: No 00:17:32.440 Asymmetric Namespace Access Log Page: Not Supported 00:17:32.440 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:32.440 Command Effects Log Page: Supported 00:17:32.440 Get Log Page Extended Data: Supported 00:17:32.440 Telemetry Log Pages: Not Supported 00:17:32.440 
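Everything in this block describes the I/O controller for nqn.2016-06.io.spdk:cnode1; the transport address an initiator would use to find it is served by the separate discovery subsystem. Whether the discovery NQN answers on this particular listener depends on target configuration, so treat the following as a sketch rather than a guaranteed result for this run:

nvme discover -t tcp -a 10.0.0.2 -s 4420   # should list nqn.2016-06.io.spdk:cnode1 if exposed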
Persistent Event Log Pages: Not Supported 00:17:32.440 Supported Log Pages Log Page: May Support 00:17:32.440 Commands Supported & Effects Log Page: Not Supported 00:17:32.440 Feature Identifiers & Effects Log Page:May Support 00:17:32.440 NVMe-MI Commands & Effects Log Page: May Support 00:17:32.440 Data Area 4 for Telemetry Log: Not Supported 00:17:32.440 Error Log Page Entries Supported: 128 00:17:32.440 Keep Alive: Supported 00:17:32.440 Keep Alive Granularity: 10000 ms 00:17:32.440 00:17:32.440 NVM Command Set Attributes 00:17:32.440 ========================== 00:17:32.440 Submission Queue Entry Size 00:17:32.440 Max: 64 00:17:32.440 Min: 64 00:17:32.440 Completion Queue Entry Size 00:17:32.440 Max: 16 00:17:32.440 Min: 16 00:17:32.440 Number of Namespaces: 32 00:17:32.440 Compare Command: Supported 00:17:32.440 Write Uncorrectable Command: Not Supported 00:17:32.440 Dataset Management Command: Supported 00:17:32.440 Write Zeroes Command: Supported 00:17:32.440 Set Features Save Field: Not Supported 00:17:32.440 Reservations: Supported 00:17:32.440 Timestamp: Not Supported 00:17:32.440 Copy: Supported 00:17:32.440 Volatile Write Cache: Present 00:17:32.440 Atomic Write Unit (Normal): 1 00:17:32.440 Atomic Write Unit (PFail): 1 00:17:32.440 Atomic Compare & Write Unit: 1 00:17:32.440 Fused Compare & Write: Supported 00:17:32.440 Scatter-Gather List 00:17:32.440 SGL Command Set: Supported 00:17:32.440 SGL Keyed: Supported 00:17:32.440 SGL Bit Bucket Descriptor: Not Supported 00:17:32.440 SGL Metadata Pointer: Not Supported 00:17:32.440 Oversized SGL: Not Supported 00:17:32.440 SGL Metadata Address: Not Supported 00:17:32.440 SGL Offset: Supported 00:17:32.440 Transport SGL Data Block: Not Supported 00:17:32.440 Replay Protected Memory Block: Not Supported 00:17:32.440 00:17:32.440 Firmware Slot Information 00:17:32.440 ========================= 00:17:32.440 Active slot: 1 00:17:32.440 Slot 1 Firmware Revision: 24.05 00:17:32.440 00:17:32.440 00:17:32.440 Commands Supported and Effects 00:17:32.440 ============================== 00:17:32.440 Admin Commands 00:17:32.440 -------------- 00:17:32.440 Get Log Page (02h): Supported 00:17:32.440 Identify (06h): Supported 00:17:32.440 Abort (08h): Supported 00:17:32.441 Set Features (09h): Supported 00:17:32.441 Get Features (0Ah): Supported 00:17:32.441 Asynchronous Event Request (0Ch): Supported 00:17:32.441 Keep Alive (18h): Supported 00:17:32.441 I/O Commands 00:17:32.441 ------------ 00:17:32.441 Flush (00h): Supported LBA-Change 00:17:32.441 Write (01h): Supported LBA-Change 00:17:32.441 Read (02h): Supported 00:17:32.441 Compare (05h): Supported 00:17:32.441 Write Zeroes (08h): Supported LBA-Change 00:17:32.441 Dataset Management (09h): Supported LBA-Change 00:17:32.441 Copy (19h): Supported LBA-Change 00:17:32.441 Unknown (79h): Supported LBA-Change 00:17:32.441 Unknown (7Ah): Supported 00:17:32.441 00:17:32.441 Error Log 00:17:32.441 ========= 00:17:32.441 00:17:32.441 Arbitration 00:17:32.441 =========== 00:17:32.441 Arbitration Burst: 1 00:17:32.441 00:17:32.441 Power Management 00:17:32.441 ================ 00:17:32.441 Number of Power States: 1 00:17:32.441 Current Power State: Power State #0 00:17:32.441 Power State #0: 00:17:32.441 Max Power: 0.00 W 00:17:32.441 Non-Operational State: Operational 00:17:32.441 Entry Latency: Not Reported 00:17:32.441 Exit Latency: Not Reported 00:17:32.441 Relative Read Throughput: 0 00:17:32.441 Relative Read Latency: 0 00:17:32.441 Relative Write Throughput: 0 00:17:32.441 Relative Write Latency: 
0 00:17:32.441 Idle Power: Not Reported 00:17:32.441 Active Power: Not Reported 00:17:32.441 Non-Operational Permissive Mode: Not Supported 00:17:32.441 00:17:32.441 Health Information 00:17:32.441 ================== 00:17:32.441 Critical Warnings: 00:17:32.441 Available Spare Space: OK 00:17:32.441 Temperature: OK 00:17:32.441 Device Reliability: OK 00:17:32.441 Read Only: No 00:17:32.441 Volatile Memory Backup: OK 00:17:32.441 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:32.441 Temperature Threshold: [2024-05-15 10:57:48.475312] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.441 [2024-05-15 10:57:48.475324] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d34c80) 00:17:32.441 [2024-05-15 10:57:48.475335] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.441 [2024-05-15 10:57:48.475358] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d947e0, cid 7, qid 0 00:17:32.441 [2024-05-15 10:57:48.475591] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.441 [2024-05-15 10:57:48.475607] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.441 [2024-05-15 10:57:48.475614] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.441 [2024-05-15 10:57:48.475620] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d947e0) on tqpair=0x1d34c80 00:17:32.441 [2024-05-15 10:57:48.475661] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:32.441 [2024-05-15 10:57:48.475683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.441 [2024-05-15 10:57:48.475694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.441 [2024-05-15 10:57:48.475704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.441 [2024-05-15 10:57:48.475713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:32.441 [2024-05-15 10:57:48.475725] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.441 [2024-05-15 10:57:48.475733] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.441 [2024-05-15 10:57:48.475740] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d34c80) 00:17:32.441 [2024-05-15 10:57:48.475750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.441 [2024-05-15 10:57:48.475788] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d94260, cid 3, qid 0 00:17:32.441 [2024-05-15 10:57:48.476067] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.441 [2024-05-15 10:57:48.476083] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.441 [2024-05-15 10:57:48.476090] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.441 [2024-05-15 10:57:48.476097] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d94260) on tqpair=0x1d34c80 00:17:32.441 [2024-05-15 10:57:48.476113] nvme_tcp.c: 
767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.441 [2024-05-15 10:57:48.476122] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.441 [2024-05-15 10:57:48.476129] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d34c80) 00:17:32.441 [2024-05-15 10:57:48.476139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.441 [2024-05-15 10:57:48.476165] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d94260, cid 3, qid 0 00:17:32.441 [2024-05-15 10:57:48.476413] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.441 [2024-05-15 10:57:48.476428] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.441 [2024-05-15 10:57:48.476435] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.441 [2024-05-15 10:57:48.476442] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d94260) on tqpair=0x1d34c80 00:17:32.441 [2024-05-15 10:57:48.476451] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:32.441 [2024-05-15 10:57:48.476458] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:32.441 [2024-05-15 10:57:48.476474] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.441 [2024-05-15 10:57:48.476483] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.441 [2024-05-15 10:57:48.476490] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d34c80) 00:17:32.441 [2024-05-15 10:57:48.476500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.441 [2024-05-15 10:57:48.476520] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d94260, cid 3, qid 0 00:17:32.441 [2024-05-15 10:57:48.476747] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.441 [2024-05-15 10:57:48.476762] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.441 [2024-05-15 10:57:48.476769] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.441 [2024-05-15 10:57:48.476775] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d94260) on tqpair=0x1d34c80 00:17:32.441 [2024-05-15 10:57:48.476793] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.441 [2024-05-15 10:57:48.476802] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.441 [2024-05-15 10:57:48.476809] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d34c80) 00:17:32.441 [2024-05-15 10:57:48.476819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.441 [2024-05-15 10:57:48.476840] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d94260, cid 3, qid 0 00:17:32.441 [2024-05-15 10:57:48.477071] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.441 [2024-05-15 10:57:48.477086] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.441 [2024-05-15 10:57:48.477093] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.441 [2024-05-15 10:57:48.477100] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x1d94260) on tqpair=0x1d34c80 00:17:32.441 [2024-05-15 10:57:48.477117] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.441 [2024-05-15 10:57:48.477126] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.441 [2024-05-15 10:57:48.477133] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d34c80) 00:17:32.441 [2024-05-15 10:57:48.477143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.441 [2024-05-15 10:57:48.477164] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d94260, cid 3, qid 0 00:17:32.441 [2024-05-15 10:57:48.477389] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.441 [2024-05-15 10:57:48.477404] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.441 [2024-05-15 10:57:48.477414] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.441 [2024-05-15 10:57:48.477421] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d94260) on tqpair=0x1d34c80 00:17:32.441 [2024-05-15 10:57:48.477439] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.441 [2024-05-15 10:57:48.477449] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.442 [2024-05-15 10:57:48.477455] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d34c80) 00:17:32.442 [2024-05-15 10:57:48.477465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.442 [2024-05-15 10:57:48.477486] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d94260, cid 3, qid 0 00:17:32.442 [2024-05-15 10:57:48.480940] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.442 [2024-05-15 10:57:48.480956] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.442 [2024-05-15 10:57:48.480963] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.442 [2024-05-15 10:57:48.480969] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d94260) on tqpair=0x1d34c80 00:17:32.442 [2024-05-15 10:57:48.480988] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:32.442 [2024-05-15 10:57:48.480997] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:32.442 [2024-05-15 10:57:48.481004] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d34c80) 00:17:32.442 [2024-05-15 10:57:48.481014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.442 [2024-05-15 10:57:48.481035] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d94260, cid 3, qid 0 00:17:32.442 [2024-05-15 10:57:48.481274] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:32.442 [2024-05-15 10:57:48.481289] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:32.442 [2024-05-15 10:57:48.481296] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:32.442 [2024-05-15 10:57:48.481303] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1d94260) on tqpair=0x1d34c80 00:17:32.442 [2024-05-15 10:57:48.481317] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
shutdown complete in 4 milliseconds 00:17:32.442 0 Kelvin (-273 Celsius) 00:17:32.442 Available Spare: 0% 00:17:32.442 Available Spare Threshold: 0% 00:17:32.442 Life Percentage Used: 0% 00:17:32.442 Data Units Read: 0 00:17:32.442 Data Units Written: 0 00:17:32.442 Host Read Commands: 0 00:17:32.442 Host Write Commands: 0 00:17:32.442 Controller Busy Time: 0 minutes 00:17:32.442 Power Cycles: 0 00:17:32.442 Power On Hours: 0 hours 00:17:32.442 Unsafe Shutdowns: 0 00:17:32.442 Unrecoverable Media Errors: 0 00:17:32.442 Lifetime Error Log Entries: 0 00:17:32.442 Warning Temperature Time: 0 minutes 00:17:32.442 Critical Temperature Time: 0 minutes 00:17:32.442 00:17:32.442 Number of Queues 00:17:32.442 ================ 00:17:32.442 Number of I/O Submission Queues: 127 00:17:32.442 Number of I/O Completion Queues: 127 00:17:32.442 00:17:32.442 Active Namespaces 00:17:32.442 ================= 00:17:32.442 Namespace ID:1 00:17:32.442 Error Recovery Timeout: Unlimited 00:17:32.442 Command Set Identifier: NVM (00h) 00:17:32.442 Deallocate: Supported 00:17:32.442 Deallocated/Unwritten Error: Not Supported 00:17:32.442 Deallocated Read Value: Unknown 00:17:32.442 Deallocate in Write Zeroes: Not Supported 00:17:32.442 Deallocated Guard Field: 0xFFFF 00:17:32.442 Flush: Supported 00:17:32.442 Reservation: Supported 00:17:32.442 Namespace Sharing Capabilities: Multiple Controllers 00:17:32.442 Size (in LBAs): 131072 (0GiB) 00:17:32.442 Capacity (in LBAs): 131072 (0GiB) 00:17:32.442 Utilization (in LBAs): 131072 (0GiB) 00:17:32.442 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:32.442 EUI64: ABCDEF0123456789 00:17:32.442 UUID: 0992c90c-ce7a-44da-827c-3ffad3ddf834 00:17:32.442 Thin Provisioning: Not Supported 00:17:32.442 Per-NS Atomic Units: Yes 00:17:32.442 Atomic Boundary Size (Normal): 0 00:17:32.442 Atomic Boundary Size (PFail): 0 00:17:32.442 Atomic Boundary Offset: 0 00:17:32.442 Maximum Single Source Range Length: 65535 00:17:32.442 Maximum Copy Length: 65535 00:17:32.442 Maximum Source Range Count: 1 00:17:32.442 NGUID/EUI64 Never Reused: No 00:17:32.442 Namespace Write Protected: No 00:17:32.442 Number of LBA Formats: 1 00:17:32.442 Current LBA Format: LBA Format #00 00:17:32.442 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:32.442 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:32.442 rmmod nvme_tcp 00:17:32.442 rmmod nvme_fabrics 00:17:32.442 rmmod 
nvme_keyring 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2835545 ']' 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2835545 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@946 -- # '[' -z 2835545 ']' 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@950 -- # kill -0 2835545 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # uname 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2835545 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2835545' 00:17:32.442 killing process with pid 2835545 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # kill 2835545 00:17:32.442 [2024-05-15 10:57:48.597092] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:32.442 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@970 -- # wait 2835545 00:17:32.703 10:57:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:32.703 10:57:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:32.703 10:57:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:32.703 10:57:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:32.703 10:57:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:32.703 10:57:48 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:32.703 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:32.703 10:57:48 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.290 10:57:50 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:35.290 00:17:35.290 real 0m6.624s 00:17:35.290 user 0m7.209s 00:17:35.290 sys 0m2.325s 00:17:35.290 10:57:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:35.290 10:57:50 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:35.290 ************************************ 00:17:35.291 END TEST nvmf_identify 00:17:35.291 ************************************ 00:17:35.291 10:57:50 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:35.291 10:57:50 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:35.291 10:57:50 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:35.291 10:57:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:35.291 ************************************ 
00:17:35.291 START TEST nvmf_perf 00:17:35.291 ************************************ 00:17:35.291 10:57:50 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:35.291 * Looking for test storage... 00:17:35.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.291 
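The host identity these tests carry comes straight from nvme-cli during the common.sh setup above: nvme gen-hostnqn emits a UUID-based NQN, and the bare UUID doubles as the host ID. A minimal sketch of that derivation (the parameter expansion is an assumption that happens to match the NVME_HOSTNQN/NVME_HOSTID pair printed above):

NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}     # strip everything through the last ':'
echo "hostnqn=$NVME_HOSTNQN hostid=$NVME_HOSTID"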
10:57:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:17:35.291 10:57:51 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:37.824 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:37.824 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:37.824 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:37.824 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:37.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:37.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:17:37.824 00:17:37.824 --- 10.0.0.2 ping statistics --- 00:17:37.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.824 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:17:37.824 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:37.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
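The connectivity checks here run over a link the harness builds itself: nvmf_tcp_init moves one port of the e810 pair into a network namespace for the target and leaves its peer in the root namespace as the initiator. Condensed from the commands traced above (interface names are the ones from this run):

ip netns add cvl_0_0_ns_spdk                 # target gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # target-side port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                           # initiator -> target, as above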
00:17:37.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:17:37.825 00:17:37.825 --- 10.0.0.1 ping statistics --- 00:17:37.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:37.825 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:17:37.825 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:37.825 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:17:37.825 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:37.825 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:37.825 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:37.825 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:37.825 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:37.825 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:37.825 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:37.825 10:57:53 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:37.825 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:37.825 10:57:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@720 -- # xtrace_disable 00:17:37.825 10:57:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:37.825 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2838050 00:17:37.825 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:37.825 10:57:53 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2838050 00:17:37.825 10:57:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@827 -- # '[' -z 2838050 ']' 00:17:37.825 10:57:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.825 10:57:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local max_retries=100 00:17:37.825 10:57:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.825 10:57:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # xtrace_disable 00:17:37.825 10:57:53 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:37.825 [2024-05-15 10:57:53.657976] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:17:37.825 [2024-05-15 10:57:53.658055] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:37.825 EAL: No free 2048 kB hugepages reported on node 1 00:17:37.825 [2024-05-15 10:57:53.739148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:37.825 [2024-05-15 10:57:53.859650] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:37.825 [2024-05-15 10:57:53.859712] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
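
The trace above is nvmf_tcp_init followed by nvmfappstart: one E810 port (cvl_0_0) is moved into a private network namespace and addressed 10.0.0.2, its twin (cvl_0_1) stays in the root namespace as 10.0.0.1, TCP port 4420 is opened in iptables, both directions are ping-verified, and nvmf_tgt is then launched inside the namespace. A minimal standalone sketch of that rig, using only commands that appear verbatim in the trace (the nvmf_tgt path is abbreviated):

    # Build the two-namespace loopback rig over one dual-port NIC (run as root).
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                 # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # root namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1          # target namespace -> root namespace
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
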
00:17:37.825 [2024-05-15 10:57:53.859727] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:37.825 [2024-05-15 10:57:53.859738] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:37.825 [2024-05-15 10:57:53.859748] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:37.825 [2024-05-15 10:57:53.862951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.825 [2024-05-15 10:57:53.863018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.825 [2024-05-15 10:57:53.863085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:37.825 [2024-05-15 10:57:53.863089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.759 10:57:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:17:38.759 10:57:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@860 -- # return 0 00:17:38.759 10:57:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:38.759 10:57:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:38.759 10:57:54 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:38.759 10:57:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.759 10:57:54 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:17:38.759 10:57:54 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:17:42.038 10:57:57 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:17:42.038 10:57:57 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:42.038 10:57:58 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:17:42.038 10:57:58 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:42.295 10:57:58 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:17:42.295 10:57:58 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:17:42.295 10:57:58 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:17:42.295 10:57:58 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:17:42.295 10:57:58 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:42.553 [2024-05-15 10:57:58.592643] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.553 10:57:58 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:42.809 10:57:58 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:42.809 10:57:58 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:43.066 10:57:59 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:17:43.066 10:57:59 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:17:43.324 10:57:59 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:43.581 [2024-05-15 10:57:59.588071] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:43.581 [2024-05-15 10:57:59.588376] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:43.581 10:57:59 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:43.838 10:57:59 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:17:43.838 10:57:59 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:17:43.838 10:57:59 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:17:43.838 10:57:59 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:17:45.208 Initializing NVMe Controllers 00:17:45.208 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:17:45.208 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:17:45.208 Initialization complete. Launching workers. 00:17:45.208 ======================================================== 00:17:45.208 Latency(us) 00:17:45.208 Device Information : IOPS MiB/s Average min max 00:17:45.208 PCIE (0000:88:00.0) NSID 1 from core 0: 85030.92 332.15 375.85 37.39 4772.58 00:17:45.208 ======================================================== 00:17:45.208 Total : 85030.92 332.15 375.85 37.39 4772.58 00:17:45.208 00:17:45.208 10:58:01 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:45.208 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.581 Initializing NVMe Controllers 00:17:46.581 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:46.581 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:46.581 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:46.581 Initialization complete. Launching workers. 
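
Between target start-up and the perf runs, perf.sh configures the target over JSON-RPC. Condensed from the rpc.py calls traced above (paths shortened; flags exactly as recorded):

    rpc="scripts/rpc.py"
    scripts/gen_nvme.sh | $rpc load_subsystem_config     # attach the local NVMe drive (becomes Nvme0n1)
    $rpc bdev_malloc_create 64 512                       # 64 MB ramdisk, 512 B blocks -> Malloc0
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # namespace 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1    # namespace 2
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Note that although the target runs inside the network namespace, these rpc.py calls work from the root namespace: the RPC endpoint is an AF_UNIX socket, which is bound to a filesystem path rather than to a network namespace.
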
00:17:46.581 ======================================================== 00:17:46.581 Latency(us) 00:17:46.581 Device Information : IOPS MiB/s Average min max 00:17:46.581 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 88.78 0.35 11622.47 317.32 45685.26 00:17:46.581 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 50.87 0.20 19657.07 6970.36 47944.56 00:17:46.581 ======================================================== 00:17:46.581 Total : 139.65 0.55 14549.36 317.32 47944.56 00:17:46.581 00:17:46.581 10:58:02 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:46.581 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.952 Initializing NVMe Controllers 00:17:47.952 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:47.952 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:47.952 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:47.952 Initialization complete. Launching workers. 00:17:47.952 ======================================================== 00:17:47.952 Latency(us) 00:17:47.952 Device Information : IOPS MiB/s Average min max 00:17:47.952 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7833.49 30.60 4099.91 686.56 8121.43 00:17:47.952 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3907.75 15.26 8226.35 6018.32 15793.08 00:17:47.952 ======================================================== 00:17:47.952 Total : 11741.24 45.86 5473.28 686.56 15793.08 00:17:47.952 00:17:48.209 10:58:04 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:17:48.209 10:58:04 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:17:48.209 10:58:04 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:17:48.209 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.737 Initializing NVMe Controllers 00:17:50.737 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:50.737 Controller IO queue size 128, less than required. 00:17:50.737 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:50.737 Controller IO queue size 128, less than required. 00:17:50.737 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:50.738 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:50.738 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:50.738 Initialization complete. Launching workers. 
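
Every run above and below drives the same spdk_nvme_perf binary with a different I/O shape. For reading the invocations: -q is queue depth per qpair, -o the I/O size in bytes, -w the workload pattern, -M the read percentage of a mixed workload, -t the run time in seconds, -r the target transport ID string, -c the core mask, and -i the shared-memory app instance ID. My reading of the remaining flags in this trace (worth confirming against this revision's --help output) is that -H/-I enable TCP header and data digests and -O/-P control the I/O unit size and per-namespace queue parallelism. A representative invocation from above:

    # 256 KiB random mixed I/O (50% reads) at queue depth 128 for two seconds, over TCP:
    ./build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
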
00:17:50.738 ======================================================== 00:17:50.738 Latency(us) 00:17:50.738 Device Information : IOPS MiB/s Average min max 00:17:50.738 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 513.62 128.41 267071.06 127675.27 416493.95 00:17:50.738 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 466.70 116.68 292411.56 109447.89 466185.26 00:17:50.738 ======================================================== 00:17:50.738 Total : 980.32 245.08 279134.90 109447.89 466185.26 00:17:50.738 00:17:50.738 10:58:06 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:17:50.738 EAL: No free 2048 kB hugepages reported on node 1 00:17:50.996 No valid NVMe controllers or AIO or URING devices found 00:17:50.996 Initializing NVMe Controllers 00:17:50.996 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:50.996 Controller IO queue size 128, less than required. 00:17:50.996 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:50.996 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:17:50.996 Controller IO queue size 128, less than required. 00:17:50.996 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:50.996 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:17:50.996 WARNING: Some requested NVMe devices were skipped 00:17:50.996 10:58:07 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:17:50.996 EAL: No free 2048 kB hugepages reported on node 1 00:17:53.600 Initializing NVMe Controllers 00:17:53.600 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:53.600 Controller IO queue size 128, less than required. 00:17:53.600 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:53.600 Controller IO queue size 128, less than required. 00:17:53.600 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:53.600 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:53.600 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:17:53.600 Initialization complete. Launching workers. 
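
The -o 36964 run directly above is a deliberate negative test: 36964 is not a multiple of either namespace's 512-byte sector size, so perf removes both namespaces from the test and bails out with "No valid NVMe controllers", which is the expected outcome here rather than a failure. The arithmetic:

    $ echo $(( 36964 % 512 ))    # 36964 = 72 * 512 + 100
    100

The "No valid NVMe controllers" line appearing before the "Initializing NVMe Controllers" banner is likely just unsynchronized stdout/stderr buffering in the captured output.
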
00:17:53.600 00:17:53.600 ==================== 00:17:53.600 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:17:53.600 TCP transport: 00:17:53.600 polls: 36723 00:17:53.600 idle_polls: 10743 00:17:53.600 sock_completions: 25980 00:17:53.600 nvme_completions: 3145 00:17:53.600 submitted_requests: 4772 00:17:53.600 queued_requests: 1 00:17:53.600 00:17:53.600 ==================== 00:17:53.600 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:17:53.600 TCP transport: 00:17:53.600 polls: 39959 00:17:53.600 idle_polls: 14735 00:17:53.600 sock_completions: 25224 00:17:53.600 nvme_completions: 3103 00:17:53.600 submitted_requests: 4682 00:17:53.600 queued_requests: 1 00:17:53.600 ======================================================== 00:17:53.600 Latency(us) 00:17:53.601 Device Information : IOPS MiB/s Average min max 00:17:53.601 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 786.00 196.50 168889.95 87239.41 228888.47 00:17:53.601 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 775.50 193.87 169127.41 68805.12 228953.62 00:17:53.601 ======================================================== 00:17:53.601 Total : 1561.50 390.37 169007.88 68805.12 228953.62 00:17:53.601 00:17:53.601 10:58:09 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:17:53.601 10:58:09 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:53.859 10:58:09 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:17:53.859 10:58:09 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:17:53.859 10:58:09 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:17:53.859 10:58:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:53.859 10:58:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:17:53.859 10:58:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:53.859 10:58:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:17:53.859 10:58:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:53.859 10:58:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:53.859 rmmod nvme_tcp 00:17:53.859 rmmod nvme_fabrics 00:17:53.859 rmmod nvme_keyring 00:17:53.859 10:58:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:53.859 10:58:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:17:53.859 10:58:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:17:53.859 10:58:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2838050 ']' 00:17:53.859 10:58:10 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2838050 00:17:53.859 10:58:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@946 -- # '[' -z 2838050 ']' 00:17:53.859 10:58:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@950 -- # kill -0 2838050 00:17:53.859 10:58:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # uname 00:17:53.859 10:58:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:17:53.859 10:58:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2838050 00:17:53.859 10:58:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:17:53.859 10:58:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:17:53.859 10:58:10 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2838050' 00:17:53.859 killing process with pid 2838050 00:17:53.859 10:58:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@965 -- # kill 2838050 00:17:53.859 [2024-05-15 10:58:10.033509] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:53.859 10:58:10 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@970 -- # wait 2838050 00:17:55.761 10:58:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:55.761 10:58:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:55.761 10:58:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:55.761 10:58:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:55.761 10:58:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:55.761 10:58:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.761 10:58:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.761 10:58:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.664 10:58:13 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:57.664 00:17:57.664 real 0m22.752s 00:17:57.664 user 1m10.457s 00:17:57.664 sys 0m5.367s 00:17:57.664 10:58:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # xtrace_disable 00:17:57.664 10:58:13 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:57.664 ************************************ 00:17:57.664 END TEST nvmf_perf 00:17:57.664 ************************************ 00:17:57.664 10:58:13 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:57.664 10:58:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:17:57.664 10:58:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:17:57.664 10:58:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:57.664 ************************************ 00:17:57.664 START TEST nvmf_fio_host 00:17:57.664 ************************************ 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:17:57.664 * Looking for test storage... 
00:17:57.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:17:57.664 10:58:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
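
The trace below is gather_supported_nvmf_pci_devs from nvmf/common.sh re-running for the fio host test: it assembles per-family PCI device-ID lists (the E810 entries 0x1592/0x159b match the 0x8086:0x159b ports found here), then resolves each matching PCI function to its kernel netdev through sysfs. A rough standalone equivalent, not the script's actual implementation (the real code walks a prebuilt pci_bus_cache instead of calling lspci):

    # Find E810 (8086:159b) ports and the netdevs the kernel created for them.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        for path in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$path" ] && echo "Found net devices under $pci: ${path##*/}"
        done
    done
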
00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:00.195 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:00.195 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:00.195 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:00.196 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:00.196 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp 
]] 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:00.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:00.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:18:00.196 00:18:00.196 --- 10.0.0.2 ping statistics --- 00:18:00.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.196 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:00.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:00.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:18:00.196 00:18:00.196 --- 10.0.0.1 ping statistics --- 00:18:00.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:00.196 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=2842439 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 2842439 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@827 -- # '[' -z 2842439 ']' 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:00.196 10:58:16 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:00.196 [2024-05-15 10:58:16.419816] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:18:00.196 [2024-05-15 10:58:16.419908] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.455 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.455 [2024-05-15 10:58:16.496800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:00.455 [2024-05-15 10:58:16.609032] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
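
nvmfappstart above follows the same pattern as in the perf test: launch nvmf_tgt inside the namespace, record nvmfpid, then block in waitforlisten until the app answers on /var/tmp/spdk.sock. A simplified poll loop with the same contract (the real helper in common/autotest_common.sh adds more retries and diagnostics):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1      # target died before listening
            scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1                                        # never came up
    }
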
00:18:00.455 [2024-05-15 10:58:16.609092] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.455 [2024-05-15 10:58:16.609121] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:00.455 [2024-05-15 10:58:16.609133] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:00.455 [2024-05-15 10:58:16.609150] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:00.455 [2024-05-15 10:58:16.609228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.455 [2024-05-15 10:58:16.609316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:00.455 [2024-05-15 10:58:16.609389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:00.455 [2024-05-15 10:58:16.609392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@860 -- # return 0 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.391 [2024-05-15 10:58:17.389789] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.391 Malloc1 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:18:01.391 [2024-05-15 10:58:17.460557] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:01.391 [2024-05-15 10:58:17.460842] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:18:01.391 
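
The ldd/grep/awk dance above is fio_plugin resolving sanitizer runtimes before it sets LD_PRELOAD (just below) and launches fio: had build/fio/spdk_nvme been linked against ASan, that runtime would need to come first on LD_PRELOAD; neither libasan nor libclang_rt.asan is linked here, so asan_lib stays empty and only the engine itself is preloaded. Boiled down (the trace runs the two grep passes separately; paths shortened):

    plugin=build/fio/spdk_nvme
    asan_lib=$(ldd "$plugin" | grep -E 'libasan|libclang_rt\.asan' | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" fio app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The fio job file selects ioengine=spdk, so the transport ID is smuggled to the engine through the --filename string rather than naming a block device.
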
10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:18:01.391 10:58:17 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:18:01.649 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:01.649 fio-3.35 00:18:01.649 Starting 1 thread 00:18:01.649 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.178 00:18:04.178 test: (groupid=0, jobs=1): err= 0: pid=2842670: Wed May 15 10:58:20 2024 00:18:04.178 read: IOPS=7716, BW=30.1MiB/s (31.6MB/s)(60.5MiB/2006msec) 00:18:04.178 slat (nsec): min=1942, max=172838, avg=2581.66, stdev=2162.15 00:18:04.178 clat (usec): min=3881, max=15170, avg=9162.68, stdev=723.83 00:18:04.178 lat (usec): min=3899, max=15173, avg=9165.27, stdev=723.74 00:18:04.178 clat percentiles (usec): 00:18:04.178 | 1.00th=[ 7635], 5.00th=[ 8094], 10.00th=[ 8291], 20.00th=[ 8586], 00:18:04.178 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9372], 00:18:04.178 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10028], 95.00th=[10290], 00:18:04.178 | 99.00th=[10814], 99.50th=[11076], 99.90th=[13698], 99.95th=[14615], 00:18:04.178 | 99.99th=[15139] 00:18:04.178 bw ( KiB/s): min=29184, max=31552, per=99.89%, avg=30832.00, stdev=1106.84, samples=4 00:18:04.178 iops : min= 7296, max= 7888, avg=7708.00, stdev=276.71, samples=4 00:18:04.178 write: IOPS=7709, BW=30.1MiB/s (31.6MB/s)(60.4MiB/2006msec); 0 zone resets 00:18:04.178 slat (nsec): min=2088, max=90081, avg=2764.87, stdev=1562.90 00:18:04.178 clat (usec): min=1649, max=14495, avg=7320.01, stdev=658.41 00:18:04.178 lat (usec): min=1655, max=14497, avg=7322.77, stdev=658.37 00:18:04.178 clat percentiles (usec): 00:18:04.178 | 1.00th=[ 5932], 5.00th=[ 6325], 10.00th=[ 6587], 20.00th=[ 6849], 00:18:04.178 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7439], 00:18:04.178 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8094], 95.00th=[ 8291], 00:18:04.178 | 99.00th=[ 8717], 99.50th=[ 8979], 99.90th=[11863], 99.95th=[13435], 00:18:04.178 | 99.99th=[14484] 00:18:04.178 bw ( KiB/s): min=30296, max=31296, per=99.89%, avg=30806.00, stdev=409.25, samples=4 00:18:04.178 iops : min= 7574, max= 7824, avg=7701.50, stdev=102.31, samples=4 00:18:04.178 lat (msec) : 2=0.01%, 4=0.10%, 10=94.25%, 20=5.64% 00:18:04.178 cpu : usr=48.33%, sys=40.85%, ctx=61, majf=0, minf=5 00:18:04.178 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:04.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:04.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:04.178 issued rwts: total=15480,15466,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:04.178 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:04.178 00:18:04.178 Run status group 0 (all jobs): 00:18:04.178 READ: bw=30.1MiB/s (31.6MB/s), 30.1MiB/s-30.1MiB/s (31.6MB/s-31.6MB/s), io=60.5MiB (63.4MB), run=2006-2006msec 00:18:04.178 WRITE: bw=30.1MiB/s (31.6MB/s), 30.1MiB/s-30.1MiB/s (31.6MB/s-31.6MB/s), io=60.4MiB (63.3MB), run=2006-2006msec 00:18:04.178 10:58:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:04.178 10:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:04.178 10:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:18:04.178 10:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:04.178 10:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local sanitizers 00:18:04.178 10:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:04.178 10:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # shift 00:18:04.178 10:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local asan_lib= 00:18:04.178 10:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:18:04.178 10:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:04.178 10:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libasan 00:18:04.178 10:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:18:04.178 10:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:18:04.178 10:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:18:04.178 10:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:18:04.178 10:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:18:04.178 10:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:18:04.178 10:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:18:04.178 10:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # asan_lib= 00:18:04.178 10:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:18:04.178 10:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:18:04.178 10:58:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:18:04.178 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:04.178 fio-3.35 00:18:04.178 Starting 1 thread 00:18:04.178 EAL: No free 2048 kB hugepages reported on node 1 00:18:06.709 00:18:06.709 test: (groupid=0, jobs=1): err= 0: pid=2843121: Wed May 15 10:58:22 2024 00:18:06.709 read: IOPS=7571, BW=118MiB/s (124MB/s)(237MiB/2007msec) 00:18:06.709 slat (usec): min=2, max=124, avg= 3.63, stdev= 1.65 00:18:06.709 clat (usec): min=4246, max=20755, avg=10440.25, stdev=2548.86 00:18:06.709 lat (usec): min=4250, max=20758, avg=10443.88, 
stdev=2548.97 00:18:06.709 clat percentiles (usec): 00:18:06.709 | 1.00th=[ 5473], 5.00th=[ 6456], 10.00th=[ 7177], 20.00th=[ 8356], 00:18:06.709 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[10290], 60.00th=[10814], 00:18:06.709 | 70.00th=[11600], 80.00th=[12518], 90.00th=[13698], 95.00th=[14877], 00:18:06.709 | 99.00th=[17695], 99.50th=[18482], 99.90th=[20055], 99.95th=[20317], 00:18:06.709 | 99.99th=[20579] 00:18:06.709 bw ( KiB/s): min=52576, max=67936, per=50.29%, avg=60920.00, stdev=7930.72, samples=4 00:18:06.709 iops : min= 3286, max= 4246, avg=3807.50, stdev=495.67, samples=4 00:18:06.709 write: IOPS=4358, BW=68.1MiB/s (71.4MB/s)(124MiB/1823msec); 0 zone resets 00:18:06.709 slat (usec): min=30, max=129, avg=33.06, stdev= 3.95 00:18:06.709 clat (usec): min=5083, max=21018, avg=11464.80, stdev=1889.91 00:18:06.709 lat (usec): min=5119, max=21049, avg=11497.87, stdev=1890.26 00:18:06.709 clat percentiles (usec): 00:18:06.709 | 1.00th=[ 7963], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9765], 00:18:06.709 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11338], 60.00th=[11731], 00:18:06.709 | 70.00th=[12256], 80.00th=[12911], 90.00th=[14091], 95.00th=[14877], 00:18:06.709 | 99.00th=[16188], 99.50th=[17171], 99.90th=[17957], 99.95th=[18220], 00:18:06.709 | 99.99th=[21103] 00:18:06.709 bw ( KiB/s): min=53440, max=71552, per=90.58%, avg=63160.00, stdev=9200.82, samples=4 00:18:06.709 iops : min= 3340, max= 4472, avg=3947.50, stdev=575.05, samples=4 00:18:06.709 lat (msec) : 10=37.10%, 20=62.83%, 50=0.07% 00:18:06.709 cpu : usr=74.13%, sys=21.49%, ctx=22, majf=0, minf=1 00:18:06.709 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:18:06.709 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:06.709 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:06.709 issued rwts: total=15196,7945,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:06.709 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:06.709 00:18:06.709 Run status group 0 (all jobs): 00:18:06.709 READ: bw=118MiB/s (124MB/s), 118MiB/s-118MiB/s (124MB/s-124MB/s), io=237MiB (249MB), run=2007-2007msec 00:18:06.709 WRITE: bw=68.1MiB/s (71.4MB/s), 68.1MiB/s-68.1MiB/s (71.4MB/s-71.4MB/s), io=124MiB (130MB), run=1823-1823msec 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 
-- # modprobe -v -r nvme-tcp 00:18:06.709 rmmod nvme_tcp 00:18:06.709 rmmod nvme_fabrics 00:18:06.709 rmmod nvme_keyring 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2842439 ']' 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2842439 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@946 -- # '[' -z 2842439 ']' 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@950 -- # kill -0 2842439 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # uname 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2842439 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2842439' 00:18:06.709 killing process with pid 2842439 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # kill 2842439 00:18:06.709 [2024-05-15 10:58:22.830339] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:06.709 10:58:22 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@970 -- # wait 2842439 00:18:06.968 10:58:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:06.968 10:58:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:06.968 10:58:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:06.968 10:58:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:06.968 10:58:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:06.968 10:58:23 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.968 10:58:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:06.968 10:58:23 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.501 10:58:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:09.501 00:18:09.501 real 0m11.390s 00:18:09.501 user 0m29.485s 00:18:09.501 sys 0m4.342s 00:18:09.501 10:58:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:09.501 10:58:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:09.501 ************************************ 00:18:09.501 END TEST nvmf_fio_host 00:18:09.501 ************************************ 00:18:09.501 10:58:25 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:09.501 10:58:25 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:09.501 10:58:25 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:09.501 
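The nvmf_fio_host run above drives stock fio against the target through SPDK's userspace NVMe driver: the spdk_nvme fio plugin is LD_PRELOADed and the NVMe-oF connection parameters ride in through --filename. A minimal sketch of that pattern, with the plugin path, fio binary, and connection string copied from the trace; the job file is a hypothetical stand-in for the example_config.fio / mock_sgl_config.fio jobs the test uses:

# hypothetical minimal job file; ioengine=spdk resolves to the preloaded plugin,
# which requires thread=1 (no forked workers)
cat > /tmp/spdk-job.fio <<'EOF'
[global]
ioengine=spdk
thread=1
rw=randrw
iodepth=128
[test]
EOF

# the NVMe-oF connection string doubles as the "filename" for the SPDK ioengine
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
    /usr/src/fio/fio /tmp/spdk-job.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096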
10:58:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:09.501 ************************************ 00:18:09.501 START TEST nvmf_failover 00:18:09.501 ************************************ 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:18:09.501 * Looking for test storage... 00:18:09.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.501 10:58:25 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:18:09.502 10:58:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:11.436 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:11.436 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:18:11.436 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:11.436 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:11.436 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:11.436 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:11.436 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:11.436 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:18:11.436 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:11.436 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:18:11.436 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:18:11.436 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:18:11.436 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:18:11.694 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:11.695 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:11.695 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:11.695 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:11.695 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:11.695 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:11.695 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:18:11.695 00:18:11.695 --- 10.0.0.2 ping statistics --- 00:18:11.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.695 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:11.695 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:11.695 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:18:11.695 00:18:11.695 --- 10.0.0.1 ping statistics --- 00:18:11.695 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.695 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2845613 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2845613 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 2845613 ']' 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:11.695 10:58:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:11.695 [2024-05-15 10:58:27.890383] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
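Behind the xtrace above, nvmf_tcp_init splits the two e810 ports between the root namespace (initiator) and a dedicated namespace (target). Condensed into a sketch, with interface names, addresses, and the firewall rule copied from the trace:

# target port moves into its own network namespace; initiator stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# address both ends of the 10.0.0.0/24 test link
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# admit NVMe/TCP traffic on the initiator side, then verify both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1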
00:18:11.695 [2024-05-15 10:58:27.890489] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.954 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.954 [2024-05-15 10:58:27.973272] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:11.954 [2024-05-15 10:58:28.089011] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.954 [2024-05-15 10:58:28.089076] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:11.954 [2024-05-15 10:58:28.089103] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.954 [2024-05-15 10:58:28.089117] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.954 [2024-05-15 10:58:28.089128] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:11.954 [2024-05-15 10:58:28.089232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.954 [2024-05-15 10:58:28.089354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:11.954 [2024-05-15 10:58:28.089357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.886 10:58:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:12.886 10:58:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:18:12.886 10:58:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:12.886 10:58:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:12.886 10:58:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:12.886 10:58:28 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.886 10:58:28 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:12.886 [2024-05-15 10:58:29.059198] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.886 10:58:29 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:13.143 Malloc0 00:18:13.143 10:58:29 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:13.401 10:58:29 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:13.658 10:58:29 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:13.916 [2024-05-15 10:58:30.055945] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:13.916 [2024-05-15 10:58:30.056281] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.916 10:58:30 nvmf_tcp.nvmf_failover 
-- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:14.173 [2024-05-15 10:58:30.300853] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:14.173 10:58:30 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:14.452 [2024-05-15 10:58:30.541687] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:18:14.452 10:58:30 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2845906 00:18:14.452 10:58:30 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:18:14.452 10:58:30 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:14.452 10:58:30 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2845906 /var/tmp/bdevperf.sock 00:18:14.452 10:58:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 2845906 ']' 00:18:14.452 10:58:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:14.452 10:58:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:14.452 10:58:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
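Stripped of xtrace noise, the failover fixture above is a short RPC sequence against the target (over its default /var/tmp/spdk.sock Unix socket) plus bdevperf held in RPC-wait mode on the initiator side. A condensed sketch, with every value taken from the trace:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192      # transport flags as used by the test
$rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                    # three listeners = three candidate paths
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done

# initiator: bdevperf waits (-z) on its own RPC socket until controllers are attached
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &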
00:18:14.452 10:58:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:14.452
10:58:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:14.710
10:58:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:14.710
10:58:30 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:18:14.710
10:58:30 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:14.968
NVMe0n1 00:18:15.225
10:58:31 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:15.483
00:18:15.483
10:58:31 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2846038 00:18:15.483
10:58:31 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:15.483
10:58:31 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:18:16.417
10:58:32 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:16.676
[2024-05-15 10:58:32.858998] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589bf0 is same with the state(5) to be set 00:18:16.676
[2024-05-15 10:58:32.859970] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1589bf0 is same with the state(5) to be set 00:18:16.677
10:58:32 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:18:19.962
10:58:35 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:20.220
00:18:20.220
10:58:36 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:20.479
[2024-05-15 10:58:36.574448] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158a420 is same with the state(5) to be set 00:18:20.479
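The recv-state error bursts above coincide with the 4420 and 4421 listeners being torn down underneath live bdevperf I/O; the NVMe0 controller is expected to fail over to the listeners that remain. A quick initiator-side check is to query bdevperf's RPC socket (a sketch, assuming the stock bdev_nvme_get_controllers RPC is available in this SPDK tree):

# NVMe0 should still be reported, now reachable via a surviving listener (4421/4422)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0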
with the state(5) to be set 00:18:20.480 [2024-05-15 10:58:36.576031] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158a420 is same with the state(5) to be set 00:18:20.480 [2024-05-15 10:58:36.576043] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158a420 is same with the state(5) to be set 00:18:20.480 [2024-05-15 10:58:36.576055] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158a420 is same with the state(5) to be set 00:18:20.480 [2024-05-15 10:58:36.576067] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158a420 is same with the state(5) to be set 00:18:20.480 [2024-05-15 10:58:36.576080] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158a420 is same with the state(5) to be set 00:18:20.480 [2024-05-15 10:58:36.576092] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158a420 is same with the state(5) to be set 00:18:20.480 [2024-05-15 10:58:36.576104] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158a420 is same with the state(5) to be set 00:18:20.480 [2024-05-15 10:58:36.576118] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x158a420 is same with the state(5) to be set 00:18:20.480 10:58:36 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:18:23.764 10:58:39 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:23.764 [2024-05-15 10:58:39.843939] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:23.764 10:58:39 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:18:24.699 10:58:40 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:24.959 [2024-05-15 10:58:41.096883] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132fef0 is same with the state(5) to be set 00:18:24.959 [2024-05-15 10:58:41.096982] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132fef0 is same with the state(5) to be set 00:18:24.959 [2024-05-15 10:58:41.097003] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132fef0 is same with the state(5) to be set 00:18:24.959 [2024-05-15 10:58:41.097015] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132fef0 is same with the state(5) to be set 00:18:24.959 [2024-05-15 10:58:41.097062] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132fef0 is same with the state(5) to be set 00:18:24.959 [2024-05-15 10:58:41.097075] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132fef0 is same with the state(5) to be set 00:18:24.959 [2024-05-15 10:58:41.097088] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132fef0 is same with the state(5) to be set 00:18:24.959 [2024-05-15 10:58:41.097100] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132fef0 is same with the state(5) to be set 00:18:24.959 [2024-05-15 10:58:41.097113] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132fef0 is same with the state(5) to be 
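The two rpc.py calls above are the core of this failover step: the target publishes a new NVMe/TCP listener on port 4420, gives the host a moment to pick it up, then retires the listener on port 4422 while I/O is still in flight. A minimal bash sketch of the same sequence, with the NQN, address, and ports taken from the trace (the rpc.py path is an assumption; adjust it for your checkout):

#!/usr/bin/env bash
# Sketch only: migrate an NVMe/TCP listener from port 4422 to port 4420
# while a host keeps I/O running. NQN/address/ports come from the trace.
set -euo pipefail

RPC=./scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
ADDR=10.0.0.2

# Publish the new listener first so the host always has one valid path.
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a "$ADDR" -s 4420

# Give the host a moment to connect to the new port.
sleep 1

# Retire the old listener; commands still queued against 4422 complete
# with ABORTED - SQ DELETION, as seen later in this log.
"$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a "$ADDR" -s 4422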
00:18:24.959 [2024-05-15 10:58:41.096883] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132fef0 is same with the state(5) to be set
00:18:24.959 [... same message repeated ~55 times, timestamps 10:58:41.096883 through 10:58:41.097704; duplicates collapsed ...]
00:18:24.959 10:58:41 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2846038
00:18:31.554 0
00:18:31.554 10:58:46 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2845906
00:18:31.554 10:58:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 2845906 ']'
00:18:31.554 10:58:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 2845906
00:18:31.554 10:58:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname
00:18:31.554 10:58:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:18:31.554 10:58:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2845906
00:18:31.554 10:58:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0
00:18:31.554 10:58:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
00:18:31.554 10:58:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2845906'
00:18:31.554 killing process with pid 2845906
00:18:31.554 10:58:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 2845906
00:18:31.554 10:58:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 2845906
00:18:31.554 10:58:47 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:18:31.554 [2024-05-15 10:58:30.603530] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization...
00:18:31.554 [2024-05-15 10:58:30.603624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2845906 ]
00:18:31.554 EAL: No free 2048 kB hugepages reported on node 1
00:18:31.554 [2024-05-15 10:58:30.677723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:31.554 [2024-05-15 10:58:30.792555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:31.554 Running I/O for 15 seconds...
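The killprocess trace above (autotest_common.sh@946 through @970) spells out the harness's shutdown discipline: reject an empty pid, probe liveness with kill -0, resolve the process name with ps, make sure the target is not sudo itself, then kill and wait so the exit status is reaped. A minimal bash reconstruction inferred purely from that trace follows; the real helper in test/common/autotest_common.sh may differ in detail (the trace does not show what happens when the name check does match sudo):

# Reconstructed from the trace above; sketch only.
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1                   # @946: empty pid is an error
    kill -0 "$pid" || return 1                  # @950: bail if the pid is gone
    if [ "$(uname)" = Linux ]; then             # @951: name lookup is OS-specific
        process_name=$(ps --no-headers -o comm= "$pid")  # @952: e.g. reactor_0
    else
        process_name=$pid                       # assumption: fall back to the pid
    fi
    [ "$process_name" != sudo ] || return 1     # @956: refuse to kill sudo itself
    echo "killing process with pid $pid"        # @964
    kill "$pid"                                 # @965: default SIGTERM
    wait "$pid" || true                         # @970: reap; tolerate the kill-induced status
}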
00:18:31.554 [2024-05-15 10:58:32.861609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:31.554 [2024-05-15 10:58:32.861654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:31.554 [... matching print_command/print_completion pairs for the remaining in-flight commands (READs lba 77144-77264 and WRITEs lba 77336-77992, all len:8), every one completed with ABORTED - SQ DELETION (00/08); ~100 pairs collapsed ...]
00:18:31.557 [2024-05-15 10:58:32.864718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:31.557 [2024-05-15 10:58:32.864734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78000 len:8 PRP1 0x0 PRP2 0x0
00:18:31.557 [2024-05-15 10:58:32.864748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:31.557 [2024-05-15 10:58:32.864765] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:18:31.557 [... the manual-complete/abort cycle above repeated for queued WRITEs lba 78008 through 78096; duplicates collapsed ...]
00:18:31.557 [2024-05-15 10:58:32.865401] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*:
aborting queued i/o 00:18:31.557 [2024-05-15 10:58:32.865412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.557 [2024-05-15 10:58:32.865422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78104 len:8 PRP1 0x0 PRP2 0x0 00:18:31.557 [2024-05-15 10:58:32.865434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.557 [2024-05-15 10:58:32.865447] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:31.557 [2024-05-15 10:58:32.865458] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.557 [2024-05-15 10:58:32.865469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78112 len:8 PRP1 0x0 PRP2 0x0 00:18:31.557 [2024-05-15 10:58:32.865481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.557 [2024-05-15 10:58:32.865494] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:31.558 [2024-05-15 10:58:32.865507] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.558 [2024-05-15 10:58:32.865519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78120 len:8 PRP1 0x0 PRP2 0x0 00:18:31.558 [2024-05-15 10:58:32.865531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.558 [2024-05-15 10:58:32.865544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:31.558 [2024-05-15 10:58:32.865555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.558 [2024-05-15 10:58:32.865566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78128 len:8 PRP1 0x0 PRP2 0x0 00:18:31.558 [2024-05-15 10:58:32.865578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.558 [2024-05-15 10:58:32.865591] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:31.558 [2024-05-15 10:58:32.865601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.558 [2024-05-15 10:58:32.865612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78136 len:8 PRP1 0x0 PRP2 0x0 00:18:31.558 [2024-05-15 10:58:32.865624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.558 [2024-05-15 10:58:32.865637] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:31.558 [2024-05-15 10:58:32.865647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.558 [2024-05-15 10:58:32.865658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78144 len:8 PRP1 0x0 PRP2 0x0 00:18:31.558 [2024-05-15 10:58:32.865670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.558 [2024-05-15 10:58:32.865683] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:31.558 [2024-05-15 
10:58:32.865694] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.558 [2024-05-15 10:58:32.865705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78152 len:8 PRP1 0x0 PRP2 0x0 00:18:31.558 [2024-05-15 10:58:32.865717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.558 [2024-05-15 10:58:32.865736] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:31.558 [2024-05-15 10:58:32.865747] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.558 [2024-05-15 10:58:32.865759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77272 len:8 PRP1 0x0 PRP2 0x0 00:18:31.558 [2024-05-15 10:58:32.865772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.558 [2024-05-15 10:58:32.865785] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:31.558 [2024-05-15 10:58:32.865797] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.558 [2024-05-15 10:58:32.865808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77280 len:8 PRP1 0x0 PRP2 0x0 00:18:31.558 [2024-05-15 10:58:32.865821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.558 [2024-05-15 10:58:32.865834] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:31.558 [2024-05-15 10:58:32.865845] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.558 [2024-05-15 10:58:32.865856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77288 len:8 PRP1 0x0 PRP2 0x0 00:18:31.558 [2024-05-15 10:58:32.865868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.558 [2024-05-15 10:58:32.865884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:31.558 [2024-05-15 10:58:32.865896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.558 [2024-05-15 10:58:32.865922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77296 len:8 PRP1 0x0 PRP2 0x0 00:18:31.558 [2024-05-15 10:58:32.865944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.558 [2024-05-15 10:58:32.865960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:31.558 [2024-05-15 10:58:32.865971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.558 [2024-05-15 10:58:32.865983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77304 len:8 PRP1 0x0 PRP2 0x0 00:18:31.558 [2024-05-15 10:58:32.865996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.558 [2024-05-15 10:58:32.866009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:31.558 [2024-05-15 10:58:32.866020] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.558 [2024-05-15 10:58:32.866031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77312 len:8 PRP1 0x0 PRP2 0x0 00:18:31.558 [2024-05-15 10:58:32.866044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.558 [2024-05-15 10:58:32.866057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:31.558 [2024-05-15 10:58:32.866068] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.558 [2024-05-15 10:58:32.866079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77320 len:8 PRP1 0x0 PRP2 0x0 00:18:31.558 [2024-05-15 10:58:32.866092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.558 [2024-05-15 10:58:32.866105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:31.558 [2024-05-15 10:58:32.866116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.558 [2024-05-15 10:58:32.866128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77328 len:8 PRP1 0x0 PRP2 0x0 00:18:31.558 [2024-05-15 10:58:32.866140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.558 [2024-05-15 10:58:32.866201] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1da7170 was disconnected and freed. reset controller. 00:18:31.558 [2024-05-15 10:58:32.866227] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:31.558 [2024-05-15 10:58:32.866263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.558 [2024-05-15 10:58:32.866281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.558 [2024-05-15 10:58:32.866297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.558 [2024-05-15 10:58:32.866311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.558 [2024-05-15 10:58:32.866325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.558 [2024-05-15 10:58:32.866339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.558 [2024-05-15 10:58:32.866353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.558 [2024-05-15 10:58:32.866370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.558 [2024-05-15 10:58:32.866384] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
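Every entry in the burst above is one of three fixed-format nvme_qpair.c messages: a command print (nvme_io_qpair_print_command), a completion print (spdk_nvme_print_completion), and, for requests that never reached the controller, a "Command completed manually" notice that immediately precedes the queued command's print. Below is a minimal sketch for tallying such a burst from a raw console capture; it assumes each message sits on its own line as SPDK emits it, and the helper name summarize_aborts is hypothetical, not part of SPDK.

    import re
    from collections import Counter

    # Matches command prints such as:
    # "nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:77928 len:8"
    CMD_RE = re.compile(
        r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
        r"sqid:\d+ cid:\d+ nsid:\d+ lba:\d+ len:\d+")
    MANUAL_RE = re.compile(r"Command completed manually")

    def summarize_aborts(log_text):
        """Count aborted READ/WRITE prints, splitting commands that were
        in flight from queued ones that were completed manually."""
        counts = Counter()
        manual_pending = False
        for line in log_text.splitlines():
            if MANUAL_RE.search(line):
                manual_pending = True
                continue
            m = CMD_RE.search(line)
            if m:
                kind = "queued" if manual_pending else "in-flight"
                counts[f"{m.group(1)} {kind}"] += 1
                manual_pending = False
        return counts

Applied to the queued portion summarized above (WRITE lba:78000-78152, READ lba:77272-77328, both in steps of 8), this would report 20 queued WRITEs and 8 queued READs.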
00:18:31.558 [2024-05-15 10:58:32.866425] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d882f0 (9): Bad file descriptor
00:18:31.558 [2024-05-15 10:58:32.869717] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:31.558 [2024-05-15 10:58:32.944294] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
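For a failover target like the one chosen above (10.0.0.2:4421) to exist at all, a second path must have been registered ahead of time for the same controller. A sketch of how that is commonly done in SPDK's nvmf failover tests follows, assuming the stock scripts/rpc.py is available and the target listens on both ports; the bdev name Nvme0 is a placeholder.

    import subprocess

    NQN = "nqn.2016-06.io.spdk:cnode1"  # subsystem NQN from the log above

    def attach(trsvcid):
        # Attaching twice with the same bdev name and NQN but a different
        # trsvcid registers the second trid as an alternate path that
        # bdev_nvme can fail over to when the first qpair disconnects.
        subprocess.run(
            ["./scripts/rpc.py", "bdev_nvme_attach_controller",
             "-b", "Nvme0",                 # placeholder bdev name
             "-t", "tcp", "-f", "ipv4",
             "-a", "10.0.0.2", "-s", trsvcid,
             "-n", NQN],
            check=True)

    attach("4420")  # primary path
    attach("4421")  # failover path, picked by bdev_nvme_failover_trid above

After the reset completes, I/O resumes on the new path; the burst that follows is the same abort sequence replayed when that qpair's submission queue is in turn deleted.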
00:18:31.558 [2024-05-15 10:58:36.577890-578543] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: in-flight READ commands sqid:1 lba:103448-103592 len:8 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:31.559 [2024-05-15 10:58:36.578558-580054] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: in-flight WRITE commands sqid:1 lba:103648-104032 len:8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:31.560 [2024-05-15 10:58:36.580083-582588] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o; 558:nvme_qpair_manual_complete_request: *NOTICE*: queued WRITE lba:104040-104192, READ lba:103600-103640, and WRITE lba:104200-104376 (len:8, PRP1 0x0 PRP2 0x0) completed manually, each with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:31.562 [2024-05-15 10:58:36.598090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:18:31.562 [2024-05-15 10:58:36.598119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:31.562 [2024-05-15 10:58:36.598133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE
sqid:1 cid:0 nsid:1 lba:104384 len:8 PRP1 0x0 PRP2 0x0 00:18:31.562 [2024-05-15 10:58:36.598153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.562 [2024-05-15 10:58:36.598169] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:31.562 [2024-05-15 10:58:36.598180] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.562 [2024-05-15 10:58:36.598191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104392 len:8 PRP1 0x0 PRP2 0x0 00:18:31.562 [2024-05-15 10:58:36.598218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.562 [2024-05-15 10:58:36.598232] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:31.562 [2024-05-15 10:58:36.598244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.562 [2024-05-15 10:58:36.598255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104400 len:8 PRP1 0x0 PRP2 0x0 00:18:31.562 [2024-05-15 10:58:36.598268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.562 [2024-05-15 10:58:36.598280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:31.562 [2024-05-15 10:58:36.598291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.562 [2024-05-15 10:58:36.598302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104408 len:8 PRP1 0x0 PRP2 0x0 00:18:31.562 [2024-05-15 10:58:36.598314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.562 [2024-05-15 10:58:36.598326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:31.562 [2024-05-15 10:58:36.598336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.562 [2024-05-15 10:58:36.598346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104416 len:8 PRP1 0x0 PRP2 0x0 00:18:31.562 [2024-05-15 10:58:36.598358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.562 [2024-05-15 10:58:36.598372] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:31.562 [2024-05-15 10:58:36.598382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.562 [2024-05-15 10:58:36.598393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104424 len:8 PRP1 0x0 PRP2 0x0 00:18:31.562 [2024-05-15 10:58:36.598406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.562 [2024-05-15 10:58:36.598420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:31.562 [2024-05-15 10:58:36.598431] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.562 [2024-05-15 10:58:36.598442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104432 len:8 PRP1 0x0 PRP2 
0x0 00:18:31.562 [2024-05-15 10:58:36.598454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.562 [2024-05-15 10:58:36.598466] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:31.562 [2024-05-15 10:58:36.598477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.562 [2024-05-15 10:58:36.598488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104440 len:8 PRP1 0x0 PRP2 0x0 00:18:31.562 [2024-05-15 10:58:36.598500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.562 [2024-05-15 10:58:36.598512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:31.562 [2024-05-15 10:58:36.598523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.562 [2024-05-15 10:58:36.598537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104448 len:8 PRP1 0x0 PRP2 0x0 00:18:31.562 [2024-05-15 10:58:36.598550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.562 [2024-05-15 10:58:36.598562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:31.562 [2024-05-15 10:58:36.598573] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.562 [2024-05-15 10:58:36.598584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104456 len:8 PRP1 0x0 PRP2 0x0 00:18:31.562 [2024-05-15 10:58:36.598596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:36.598609] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:31.563 [2024-05-15 10:58:36.598620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.563 [2024-05-15 10:58:36.598631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104464 len:8 PRP1 0x0 PRP2 0x0 00:18:31.563 [2024-05-15 10:58:36.598643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:36.598712] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f51830 was disconnected and freed. reset controller. 
00:18:31.563 [2024-05-15 10:58:36.598732] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:18:31.563 [2024-05-15 10:58:36.598786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:18:31.563 [2024-05-15 10:58:36.598805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:31.563 [2024-05-15 10:58:36.598823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:18:31.563 [2024-05-15 10:58:36.598837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:31.563 [2024-05-15 10:58:36.598852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:18:31.563 [2024-05-15 10:58:36.598865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:31.563 [2024-05-15 10:58:36.598879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:31.563 [2024-05-15 10:58:36.598893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:31.563 [2024-05-15 10:58:36.598906] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:18:31.563 [2024-05-15 10:58:36.598996] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d882f0 (9): Bad file descriptor
00:18:31.563 [2024-05-15 10:58:36.602401] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:18:31.563 [2024-05-15 10:58:36.767181] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:18:31.563 [2024-05-15 10:58:41.097927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:68808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.563 [2024-05-15 10:58:41.097993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:41.098023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:68816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.563 [2024-05-15 10:58:41.098039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:41.098062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:68824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.563 [2024-05-15 10:58:41.098104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:41.098120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:68832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.563 [2024-05-15 10:58:41.098134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:41.098148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:68840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.563 [2024-05-15 10:58:41.098162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:41.098177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:68848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.563 [2024-05-15 10:58:41.098191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:41.098206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:68856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.563 [2024-05-15 10:58:41.098219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:41.098234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:68864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.563 [2024-05-15 10:58:41.098262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:41.098277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:68872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.563 [2024-05-15 10:58:41.098290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:41.098304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:68880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.563 [2024-05-15 10:58:41.098317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:41.098332] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:68888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.563 [2024-05-15 10:58:41.098345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:41.098360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:68896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.563 [2024-05-15 10:58:41.098373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:41.098387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:68904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.563 [2024-05-15 10:58:41.098400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:41.098414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:68912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.563 [2024-05-15 10:58:41.098427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:41.098441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:68920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.563 [2024-05-15 10:58:41.098466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:41.098481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:68928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.563 [2024-05-15 10:58:41.098495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:41.098509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:68936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.563 [2024-05-15 10:58:41.098523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:41.098538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:68944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.563 [2024-05-15 10:58:41.098551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:41.098565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:68952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.563 [2024-05-15 10:58:41.098578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:41.098593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:68960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.563 [2024-05-15 10:58:41.098605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:41.098620] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:68968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.563 [2024-05-15 10:58:41.098632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:41.098647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:68976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.563 [2024-05-15 10:58:41.098661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:41.098675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:68984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.563 [2024-05-15 10:58:41.098688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:41.098702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:68992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.563 [2024-05-15 10:58:41.098715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:41.098729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:69320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.563 [2024-05-15 10:58:41.098743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.563 [2024-05-15 10:58:41.098757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:69328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.564 [2024-05-15 10:58:41.098770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.098784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:69336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.564 [2024-05-15 10:58:41.098798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.098819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.564 [2024-05-15 10:58:41.098833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.098847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:69352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.564 [2024-05-15 10:58:41.098860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.098875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:69360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.564 [2024-05-15 10:58:41.098888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.098903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:86 nsid:1 lba:69368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.564 [2024-05-15 10:58:41.098916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.098982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:69376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.564 [2024-05-15 10:58:41.099002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:69384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.564 [2024-05-15 10:58:41.099032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:69392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.564 [2024-05-15 10:58:41.099060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:69400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.564 [2024-05-15 10:58:41.099089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:69408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.564 [2024-05-15 10:58:41.099119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:69416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.564 [2024-05-15 10:58:41.099149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:69424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.564 [2024-05-15 10:58:41.099179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:69432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.564 [2024-05-15 10:58:41.099208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:69440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.564 [2024-05-15 10:58:41.099236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:69000 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:31.564 [2024-05-15 10:58:41.099289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:69008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.564 [2024-05-15 10:58:41.099316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:69016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.564 [2024-05-15 10:58:41.099347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:69024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.564 [2024-05-15 10:58:41.099376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:69032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.564 [2024-05-15 10:58:41.099404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:69040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.564 [2024-05-15 10:58:41.099431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.564 [2024-05-15 10:58:41.099459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:69056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.564 [2024-05-15 10:58:41.099486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:69448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.564 [2024-05-15 10:58:41.099514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:69456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.564 [2024-05-15 10:58:41.099541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:69464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.564 [2024-05-15 
10:58:41.099569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:69472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.564 [2024-05-15 10:58:41.099596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:69480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.564 [2024-05-15 10:58:41.099627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.564 [2024-05-15 10:58:41.099654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:69496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.564 [2024-05-15 10:58:41.099682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:69504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.564 [2024-05-15 10:58:41.099709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:69064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.564 [2024-05-15 10:58:41.099736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:69072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.564 [2024-05-15 10:58:41.099763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:69080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.564 [2024-05-15 10:58:41.099790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:69088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.564 [2024-05-15 10:58:41.099818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:69096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.564 [2024-05-15 10:58:41.099845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:69104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.564 [2024-05-15 10:58:41.099872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:69112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.564 [2024-05-15 10:58:41.099899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:69120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.564 [2024-05-15 10:58:41.099953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.099968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:69512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.564 [2024-05-15 10:58:41.099998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.100018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:69520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.564 [2024-05-15 10:58:41.100033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.564 [2024-05-15 10:58:41.100048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:69528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.564 [2024-05-15 10:58:41.100062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:69536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.565 [2024-05-15 10:58:41.100091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:69544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.565 [2024-05-15 10:58:41.100121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:69552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.565 [2024-05-15 10:58:41.100150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:69560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.565 [2024-05-15 10:58:41.100179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:69568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.565 [2024-05-15 10:58:41.100210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:69128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.565 [2024-05-15 10:58:41.100253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:69136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.565 [2024-05-15 10:58:41.100282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:69144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.565 [2024-05-15 10:58:41.100326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:69152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.565 [2024-05-15 10:58:41.100353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:69160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.565 [2024-05-15 10:58:41.100380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:69168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.565 [2024-05-15 10:58:41.100411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:69176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.565 [2024-05-15 10:58:41.100439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:69184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.565 [2024-05-15 10:58:41.100467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:69576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.565 [2024-05-15 10:58:41.100494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:69584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.565 [2024-05-15 10:58:41.100522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:69592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.565 [2024-05-15 10:58:41.100549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:69600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.565 [2024-05-15 10:58:41.100575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:69608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.565 [2024-05-15 10:58:41.100603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:69616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.565 [2024-05-15 10:58:41.100630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:69624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.565 [2024-05-15 10:58:41.100657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:69632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.565 [2024-05-15 10:58:41.100684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:69192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.565 [2024-05-15 10:58:41.100712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:69200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.565 [2024-05-15 10:58:41.100740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:69208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.565 [2024-05-15 10:58:41.100770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 
10:58:41.100785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:69216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.565 [2024-05-15 10:58:41.100798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:69224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.565 [2024-05-15 10:58:41.100826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:69232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.565 [2024-05-15 10:58:41.100853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:69240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.565 [2024-05-15 10:58:41.100881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:69248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.565 [2024-05-15 10:58:41.100908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:69640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.565 [2024-05-15 10:58:41.100959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.100977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:69648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.565 [2024-05-15 10:58:41.100991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.101006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:69656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.565 [2024-05-15 10:58:41.101020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.101035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:69664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.565 [2024-05-15 10:58:41.101048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.101063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:69672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.565 [2024-05-15 10:58:41.101077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.101091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:69680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.565 [2024-05-15 10:58:41.101105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.101120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:69688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.565 [2024-05-15 10:58:41.101133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.101152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:69696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.565 [2024-05-15 10:58:41.101166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.101181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:69704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.565 [2024-05-15 10:58:41.101195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.101210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.565 [2024-05-15 10:58:41.101223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.101254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:69720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.565 [2024-05-15 10:58:41.101268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.565 [2024-05-15 10:58:41.101282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:69728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.565 [2024-05-15 10:58:41.101295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.566 [2024-05-15 10:58:41.101309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:69736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.566 [2024-05-15 10:58:41.101322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.566 [2024-05-15 10:58:41.101336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:69744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.566 [2024-05-15 10:58:41.101349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.566 [2024-05-15 10:58:41.101363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:69752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.566 [2024-05-15 10:58:41.101376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.566 [2024-05-15 10:58:41.101391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:9 nsid:1 lba:69760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.566 [2024-05-15 10:58:41.101404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.566 [2024-05-15 10:58:41.101418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:69768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.566 [2024-05-15 10:58:41.101432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.566 [2024-05-15 10:58:41.101446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:69776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.566 [2024-05-15 10:58:41.101459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.566 [2024-05-15 10:58:41.101473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:69784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.566 [2024-05-15 10:58:41.101486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.566 [2024-05-15 10:58:41.101501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:69792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.566 [2024-05-15 10:58:41.101516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.566 [2024-05-15 10:58:41.101531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:69800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.566 [2024-05-15 10:58:41.101545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.566 [2024-05-15 10:58:41.101559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:69808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.566 [2024-05-15 10:58:41.101572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.566 [2024-05-15 10:58:41.101586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:69816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.566 [2024-05-15 10:58:41.101599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.566 [2024-05-15 10:58:41.101614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:69824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:31.566 [2024-05-15 10:58:41.101627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.566 [2024-05-15 10:58:41.101641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:69256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.566 [2024-05-15 10:58:41.101654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.566 [2024-05-15 10:58:41.101669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:69264 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:31.566 [2024-05-15 10:58:41.101683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.566 [2024-05-15 10:58:41.101697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:69272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.566 [2024-05-15 10:58:41.101710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.566 [2024-05-15 10:58:41.101724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:69280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.566 [2024-05-15 10:58:41.101737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.566 [2024-05-15 10:58:41.101752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:69288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.566 [2024-05-15 10:58:41.101779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.566 [2024-05-15 10:58:41.101795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:69296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.566 [2024-05-15 10:58:41.101808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.566 [2024-05-15 10:58:41.101823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:69304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:31.566 [2024-05-15 10:58:41.101837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.566 [2024-05-15 10:58:41.101851] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f51f30 is same with the state(5) to be set 00:18:31.566 [2024-05-15 10:58:41.101867] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:31.566 [2024-05-15 10:58:41.101879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:31.566 [2024-05-15 10:58:41.101894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:69312 len:8 PRP1 0x0 PRP2 0x0 00:18:31.566 [2024-05-15 10:58:41.101908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.566 [2024-05-15 10:58:41.101989] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f51f30 was disconnected and freed. reset controller. 
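The long run of (00/08) completions above is the host driver draining its queue during failover: every command still outstanding when the submission queue is torn down gets manually completed with status code type 0x0 (generic) and status code 0x08, Command Aborted due to SQ Deletion. A minimal decode sketch (the 0x0010 example word is illustrative, chosen so it unpacks to exactly that SCT/SC pair; the bit order follows SPDK's spdk_nvme_status bitfield, with the phase tag in bit 0):

  # unpack a 16-bit CQE status word: bit 0 = phase (p), bits 8:1 = SC, bits 11:9 = SCT
  status=0x0010   # illustrative value: decodes to SCT=0x0, SC=0x08 (SQ deletion), p=0
  printf 'SCT=0x%x SC=0x%02x\n' $(( (status >> 9) & 0x7 )) $(( (status >> 1) & 0xff ))

The trailing p:0 m:0 dnr:0 fields in each log line are the remaining status bits: phase tag, more, and do-not-retry.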
00:18:31.566 [2024-05-15 10:58:41.102010] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:18:31.566 [2024-05-15 10:58:41.102043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.566 [2024-05-15 10:58:41.102062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.566 [2024-05-15 10:58:41.102077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.566 [2024-05-15 10:58:41.102091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.566 [2024-05-15 10:58:41.102105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.566 [2024-05-15 10:58:41.102118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.566 [2024-05-15 10:58:41.102132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:31.566 [2024-05-15 10:58:41.102145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:31.566 [2024-05-15 10:58:41.102159] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:31.566 [2024-05-15 10:58:41.105494] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:31.566 [2024-05-15 10:58:41.105535] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d882f0 (9): Bad file descriptor 00:18:31.566 [2024-05-15 10:58:41.180331] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
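This reset/failover churn is deliberate: failover.sh publishes nqn.2016-06.io.spdk:cnode1 on three ports and registers all three as trids on the same bdevperf controller, then detaches the active path so bdev_nvme fails over to the next one. The "Resetting controller successful" lines are what the script counts just below (it expects exactly three, one per forced path change). Reconstructed from the rpc.py calls traced below, not quoted verbatim from the script, the wiring is roughly:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SUB=nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_subsystem_add_listener $SUB -t tcp -a 10.0.0.2 -s 4421        # second path
  $RPC nvmf_subsystem_add_listener $SUB -t tcp -a 10.0.0.2 -s 4422        # third path
  for port in 4420 4421 4422; do                    # primary trid plus two failover trids on NVMe0
      $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
          -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $SUB
  done
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $SUB    # drop the active path: forces a failover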
00:18:31.566 00:18:31.566 Latency(us) 00:18:31.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.566 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:31.566 Verification LBA range: start 0x0 length 0x4000 00:18:31.566 NVMe0n1 : 15.01 8823.98 34.47 801.94 0.00 13268.59 1104.40 30680.56 00:18:31.566 =================================================================================================================== 00:18:31.566 Total : 8823.98 34.47 801.94 0.00 13268.59 1104.40 30680.56 00:18:31.566 Received shutdown signal, test time was about 15.000000 seconds 00:18:31.566 00:18:31.566 Latency(us) 00:18:31.566 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:31.566 =================================================================================================================== 00:18:31.566 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:31.566 10:58:47 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:18:31.566 10:58:47 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:18:31.566 10:58:47 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:18:31.566 10:58:47 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2847889 00:18:31.566 10:58:47 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:18:31.566 10:58:47 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2847889 /var/tmp/bdevperf.sock 00:18:31.566 10:58:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@827 -- # '[' -z 2847889 ']' 00:18:31.566 10:58:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:31.566 10:58:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:31.566 10:58:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:31.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
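The relaunch above starts bdevperf with -z, which brings the app up idle and listening on /var/tmp/bdevperf.sock; the actual I/O run is kicked off later with the perform_tests RPC (visible further down as bdevperf.py ... perform_tests), after the script has re-wired the controller paths. In outline, using the exact command lines from the trace:

  BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  $BDEVPERF -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  # ... attach/detach NVMe0 paths over the RPC socket while bdevperf sits idle ...
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests    # only now does the 1-second verify run start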
00:18:31.566 10:58:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:31.566 10:58:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:31.566 10:58:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:31.566 10:58:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@860 -- # return 0 00:18:31.566 10:58:47 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:31.566 [2024-05-15 10:58:47.679061] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:31.566 10:58:47 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:18:31.824 [2024-05-15 10:58:47.923805] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:18:31.824 10:58:47 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:32.082 NVMe0n1 00:18:32.082 10:58:48 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:32.647 00:18:32.647 10:58:48 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:32.905 00:18:32.905 10:58:49 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:32.905 10:58:49 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:18:33.164 10:58:49 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:33.422 10:58:49 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:18:36.708 10:58:52 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:36.708 10:58:52 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:18:36.708 10:58:52 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2848557 00:18:36.708 10:58:52 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:36.708 10:58:52 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2848557 00:18:38.082 0 00:18:38.082 10:58:54 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:38.082 [2024-05-15 10:58:47.133133] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:18:38.082 [2024-05-15 10:58:47.133241] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2847889 ] 00:18:38.082 EAL: No free 2048 kB hugepages reported on node 1 00:18:38.082 [2024-05-15 10:58:47.203867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.082 [2024-05-15 10:58:47.309515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.082 [2024-05-15 10:58:49.589447] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:18:38.082 [2024-05-15 10:58:49.589539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:38.082 [2024-05-15 10:58:49.589563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.082 [2024-05-15 10:58:49.589580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:38.082 [2024-05-15 10:58:49.589608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.082 [2024-05-15 10:58:49.589622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:38.082 [2024-05-15 10:58:49.589635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.082 [2024-05-15 10:58:49.589649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:38.082 [2024-05-15 10:58:49.589664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.082 [2024-05-15 10:58:49.589678] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:38.082 [2024-05-15 10:58:49.589716] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:38.082 [2024-05-15 10:58:49.589746] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207f2f0 (9): Bad file descriptor 00:18:38.082 [2024-05-15 10:58:49.682190] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:38.082 Running I/O for 1 seconds... 
00:18:38.082 00:18:38.082 Latency(us) 00:18:38.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:38.082 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:38.082 Verification LBA range: start 0x0 length 0x4000 00:18:38.082 NVMe0n1 : 1.01 8002.67 31.26 0.00 0.00 15915.62 3470.98 17670.45 00:18:38.082 =================================================================================================================== 00:18:38.082 Total : 8002.67 31.26 0.00 0.00 15915.62 3470.98 17670.45 00:18:38.082 10:58:54 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:38.082 10:58:54 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:18:38.082 10:58:54 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:38.352 10:58:54 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:38.352 10:58:54 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:18:38.922 10:58:54 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:38.922 10:58:55 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:18:42.202 10:58:58 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:42.202 10:58:58 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:18:42.202 10:58:58 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2847889 00:18:42.202 10:58:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 2847889 ']' 00:18:42.202 10:58:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 2847889 00:18:42.202 10:58:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:18:42.202 10:58:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:42.202 10:58:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2847889 00:18:42.202 10:58:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:18:42.202 10:58:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:18:42.202 10:58:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2847889' 00:18:42.202 killing process with pid 2847889 00:18:42.202 10:58:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 2847889 00:18:42.202 10:58:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 2847889 00:18:42.460 10:58:58 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:18:42.460 10:58:58 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:42.718 10:58:58 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:18:42.718 
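The two result tables are internally consistent, which is a quick way to sanity-check a run: MiB/s is IOPS times the 4 KiB I/O size, the failover run's 801.94 Fail/s are the commands aborted across the three resets, and with the queue kept full at depth 128 the clean run's IOPS lands about where Little's law (IOPS ~ queue_depth / mean latency) predicts. Checking with the numbers copied from the tables above:

  echo '8823.98 * 4096 / 1048576' | bc -l    # failover run: ~34.47 MiB/s, matching the table
  echo '8002.67 * 4096 / 1048576' | bc -l    # clean run:    ~31.26 MiB/s, matching the table
  echo '128 / (15915.62 / 1000000)' | bc -l  # ~8042 IOPS expected vs 8002.67 observed (~0.5% off)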
10:58:58 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:18:42.718 10:58:58 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:18:42.718 10:58:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:42.718 10:58:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:18:42.718 10:58:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:42.718 10:58:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:18:42.718 10:58:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:42.718 10:58:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:42.718 rmmod nvme_tcp 00:18:42.718 rmmod nvme_fabrics 00:18:42.976 rmmod nvme_keyring 00:18:42.976 10:58:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:42.976 10:58:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:18:42.976 10:58:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:18:42.976 10:58:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2845613 ']' 00:18:42.976 10:58:58 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2845613 00:18:42.976 10:58:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@946 -- # '[' -z 2845613 ']' 00:18:42.976 10:58:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@950 -- # kill -0 2845613 00:18:42.976 10:58:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # uname 00:18:42.976 10:58:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:42.976 10:58:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2845613 00:18:42.976 10:58:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:42.976 10:58:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:42.976 10:58:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2845613' 00:18:42.976 killing process with pid 2845613 00:18:42.976 10:58:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # kill 2845613 00:18:42.976 [2024-05-15 10:58:59.002235] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:42.976 10:58:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@970 -- # wait 2845613 00:18:43.235 10:58:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:43.235 10:58:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:43.235 10:58:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:43.235 10:58:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:43.235 10:58:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:43.235 10:58:59 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.235 10:58:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:43.235 10:58:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.142 10:59:01 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:45.142 00:18:45.142 real 0m36.121s 00:18:45.142 user 
2m2.813s 00:18:45.142 sys 0m7.207s 00:18:45.142 10:59:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:45.142 10:59:01 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:18:45.142 ************************************ 00:18:45.142 END TEST nvmf_failover 00:18:45.142 ************************************ 00:18:45.401 10:59:01 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:45.401 10:59:01 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:45.401 10:59:01 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:45.401 10:59:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:45.401 ************************************ 00:18:45.401 START TEST nvmf_host_discovery 00:18:45.401 ************************************ 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:18:45.401 * Looking for test storage... 00:18:45.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- 
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 
-- # have_pci_nics=0 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:18:45.401 10:59:01 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:47.933 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:47.933 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == 
e810 ]] 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:47.933 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:47.933 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:47.933 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:47.934 10:59:03 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:47.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:47.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:18:47.934 00:18:47.934 --- 10.0.0.2 ping statistics --- 00:18:47.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.934 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:47.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:47.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:18:47.934 00:18:47.934 --- 10.0.0.1 ping statistics --- 00:18:47.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.934 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@720 -- # xtrace_disable 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2851567 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2851567 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 2851567 ']' 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:47.934 10:59:03 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:47.934 [2024-05-15 10:59:04.021468] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:18:47.934 [2024-05-15 10:59:04.021546] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.934 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.934 [2024-05-15 10:59:04.096762] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.221 [2024-05-15 10:59:04.211936] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
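The two ping exchanges above confirm the topology nvmftestinit builds for NET_TYPE=phy: the target's port (cvl_0_0, 10.0.0.2) is moved into a private network namespace while the initiator keeps cvl_0_1 (10.0.0.1) in the root namespace, so host and target traffic on this single machine actually crosses the physical link between the two E810 ports. Condensed from the commands traced above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target NIC into its own netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

The target app is then prefixed with ip netns exec cvl_0_0_ns_spdk (see the nvmf_tgt launch just below), while initiator-side tools run normally in the root namespace.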
00:18:48.221 [2024-05-15 10:59:04.212006] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:48.221 [2024-05-15 10:59:04.212035] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:48.221 [2024-05-15 10:59:04.212047] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:48.221 [2024-05-15 10:59:04.212058] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:48.222 [2024-05-15 10:59:04.212085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.222 [2024-05-15 10:59:04.359233] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.222 [2024-05-15 10:59:04.367183] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:48.222 [2024-05-15 10:59:04.367467] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.222 null0 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.222 null1 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2851597 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2851597 /tmp/host.sock 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@827 -- # '[' -z 2851597 ']' 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local max_retries=100 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:18:48.222 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # xtrace_disable 00:18:48.222 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.480 [2024-05-15 10:59:04.439619] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:18:48.480 [2024-05-15 10:59:04.439701] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2851597 ] 00:18:48.480 EAL: No free 2048 kB hugepages reported on node 1 00:18:48.480 [2024-05-15 10:59:04.512280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.480 [2024-05-15 10:59:04.629780] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.738 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:18:48.738 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@860 -- # return 0 00:18:48.738 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:48.738 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:18:48.738 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.738 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.738 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.738 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:18:48.738 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.738 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.738 10:59:04 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.738 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:18:48.738 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:18:48.738 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:48.738 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:48.738 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.738 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:48.739 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.997 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:18:48.997 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:18:48.997 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:48.997 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.997 10:59:04 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.997 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:48.997 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:48.997 10:59:04 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:48.997 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.997 10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:18:48.997 10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:48.997 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.997 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.997 [2024-05-15 10:59:05.037194] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.997 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:18:48.998 
10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == \n\v\m\e\0 ]] 00:18:48.998 10:59:05 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:18:49.931 [2024-05-15 10:59:05.818208] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:49.931 [2024-05-15 10:59:05.818239] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:49.931 [2024-05-15 10:59:05.818282] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:49.931 [2024-05-15 10:59:05.905554] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:18:49.931 [2024-05-15 10:59:06.128980] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:49.931 [2024-05-15 10:59:06.129004] 
bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 
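A note for readers following this trace: almost every expansion above is one of three small shell helpers. A minimal reconstruction, with the jq filters and control flow taken verbatim from the @55/@59 and @910-@916 lines (a sketch of the helpers in test/common and host/discovery.sh, not the canonical sources):

# Poll an arbitrary bash condition, one-second retries, up to 10 attempts (@910-@916).
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        eval "$cond" && return 0
        sleep 1
    done
    return 1
}

# Controller names as seen by the host-side app (@59).
get_subsystem_names() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

# Bdev names exposed by those controllers (@55).
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

So a wait such as host/discovery.sh@106 reduces to: waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'.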
00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0 ]] 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:50.189 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.190 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.190 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.190 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:50.190 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:18:50.190 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:18:50.190 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:50.190 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:18:50.190 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.190 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.190 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.190 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:50.190 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:50.190 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:50.190 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:50.190 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:50.190 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:18:50.190 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:50.190 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:50.190 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.190 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.190 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:50.190 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:50.448 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.448 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:50.448 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:50.448 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:18:50.448 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:18:50.448 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:50.448 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:50.448 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:50.448 10:59:06 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@912 -- # (( max-- )) 00:18:50.448 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:50.448 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:18:50.448 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:18:50.448 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:50.448 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.448 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.448 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.448 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:18:50.448 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:50.448 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:18:50.448 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.449 [2024-05-15 10:59:06.505456] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:50.449 [2024-05-15 10:59:06.506117] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:50.449 [2024-05-15 10:59:06.506166] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.449 10:59:06 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:18:50.449 10:59:06 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:18:50.449 [2024-05-15 10:59:06.633938] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:18:50.707 [2024-05-15 10:59:06.731722] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:50.707 [2024-05-15 10:59:06.731749] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:18:50.707 [2024-05-15 10:59:06.731760] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.641 [2024-05-15 10:59:07.717866] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:18:51.641 [2024-05-15 10:59:07.717902] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:51.641 [2024-05-15 10:59:07.725874] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.641 [2024-05-15 10:59:07.725912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.641 [2024-05-15 10:59:07.725953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.641 [2024-05-15 10:59:07.725968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.641 [2024-05-15 10:59:07.725981] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.641 [2024-05-15 10:59:07.725995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.641 [2024-05-15 10:59:07.726010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:51.641 [2024-05-15 10:59:07.726023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:51.641 [2024-05-15 10:59:07.726037] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1491840 is same with the state(5) to be set 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.641 [2024-05-15 10:59:07.735867] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1491840 (9): Bad file descriptor 00:18:51.641 [2024-05-15 10:59:07.745916] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:51.641 [2024-05-15 10:59:07.746244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.641 [2024-05-15 10:59:07.746276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1491840 with addr=10.0.0.2, port=4420 00:18:51.641 [2024-05-15 10:59:07.746294] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1491840 is same with the state(5) to be set 00:18:51.641 [2024-05-15 10:59:07.746320] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1491840 (9): Bad file descriptor 00:18:51.641 [2024-05-15 10:59:07.746360] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:51.641 [2024-05-15 10:59:07.746380] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:51.641 [2024-05-15 10:59:07.746397] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:51.641 [2024-05-15 10:59:07.746420] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
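The ERROR burst starting here is the expected part of the test: host/discovery.sh@127 above removed the 4420 listener, so every host reconnect to 10.0.0.2:4420 now dies in connect() with errno 111 (ECONNREFUSED) until the refreshed discovery log page drops the stale path. The two target-side RPCs driving this phase, copied from the @118/@127 records (a standalone repro sketch):

rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# Once the discovery AER lands, only the 4421 path should remain on nvme0.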
00:18:51.641 [2024-05-15 10:59:07.756015] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:51.641 [2024-05-15 10:59:07.756332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.641 [2024-05-15 10:59:07.756362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1491840 with addr=10.0.0.2, port=4420 00:18:51.641 [2024-05-15 10:59:07.756381] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1491840 is same with the state(5) to be set 00:18:51.641 [2024-05-15 10:59:07.756405] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1491840 (9): Bad file descriptor 00:18:51.641 [2024-05-15 10:59:07.756441] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:51.641 [2024-05-15 10:59:07.756461] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:51.641 [2024-05-15 10:59:07.756476] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:51.641 [2024-05-15 10:59:07.756497] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.641 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:18:51.642 [2024-05-15 10:59:07.766083] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:51.642 [2024-05-15 10:59:07.766393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.642 [2024-05-15 10:59:07.766422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1491840 with addr=10.0.0.2, port=4420 00:18:51.642 [2024-05-15 10:59:07.766438] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1491840 is same with the state(5) to be set 00:18:51.642 [2024-05-15 10:59:07.766460] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1491840 (9): Bad file descriptor 00:18:51.642 [2024-05-15 10:59:07.766508] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:51.642 [2024-05-15 10:59:07.766526] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:51.642 [2024-05-15 10:59:07.766539] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:51.642 [2024-05-15 10:59:07.766576] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
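The port comparisons at @107/@122 above and @131 below all go through one more helper; a reconstruction with the jq filter copied from the @63 records (sketch):

# trsvcids of all active paths for one controller, numerically sorted, space-joined.
get_subsystem_paths() {
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n $1 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

# e.g. host/discovery.sh@131: waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'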
00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:51.642 [2024-05-15 10:59:07.776187] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:51.642 [2024-05-15 10:59:07.776478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.642 [2024-05-15 10:59:07.776509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1491840 with addr=10.0.0.2, port=4420 00:18:51.642 [2024-05-15 10:59:07.776527] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1491840 is same with the state(5) to be set 00:18:51.642 [2024-05-15 10:59:07.776551] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1491840 (9): Bad file descriptor 00:18:51.642 [2024-05-15 10:59:07.776588] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:51.642 [2024-05-15 10:59:07.776607] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:51.642 [2024-05-15 10:59:07.776623] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:51.642 [2024-05-15 10:59:07.776644] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:51.642 [2024-05-15 10:59:07.786260] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:51.642 [2024-05-15 10:59:07.786569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.642 [2024-05-15 10:59:07.786599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1491840 with addr=10.0.0.2, port=4420 00:18:51.642 [2024-05-15 10:59:07.786623] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1491840 is same with the state(5) to be set 00:18:51.642 [2024-05-15 10:59:07.786648] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1491840 (9): Bad file descriptor 00:18:51.642 [2024-05-15 10:59:07.786686] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:51.642 [2024-05-15 10:59:07.786706] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:51.642 [2024-05-15 10:59:07.786721] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:51.642 [2024-05-15 10:59:07.786755] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
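The aer/log-page records above show how path changes propagate: the discovery controller raises an AEN, the host refetches the discovery log page, and bdev_nvme reconciles paths ("new path", "not found", "found again"). The same discovery log page can be inspected by hand with nvme-cli; this is not part of the test and assumes nvme-cli plus kernel NVMe/TCP support on the box:

modprobe nvme-tcp
nvme discover -t tcp -a 10.0.0.2 -s 8009
# Expect one Discovery Log Entry per live listener, i.e. only trsvcid 4421 once 4420 is removed.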
00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.642 [2024-05-15 10:59:07.796350] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:51.642 [2024-05-15 10:59:07.796629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.642 [2024-05-15 10:59:07.796659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1491840 with addr=10.0.0.2, port=4420 00:18:51.642 [2024-05-15 10:59:07.796677] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1491840 is same with the state(5) to be set 00:18:51.642 [2024-05-15 10:59:07.796701] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1491840 (9): Bad file descriptor 00:18:51.642 [2024-05-15 10:59:07.796737] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:51.642 [2024-05-15 10:59:07.796756] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:51.642 [2024-05-15 10:59:07.796771] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:51.642 [2024-05-15 10:59:07.796792] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:51.642 [2024-05-15 10:59:07.806426] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:51.642 [2024-05-15 10:59:07.806697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.642 [2024-05-15 10:59:07.806729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1491840 with addr=10.0.0.2, port=4420 00:18:51.642 [2024-05-15 10:59:07.806746] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1491840 is same with the state(5) to be set 00:18:51.642 [2024-05-15 10:59:07.806771] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1491840 (9): Bad file descriptor 00:18:51.642 [2024-05-15 10:59:07.806822] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:51.642 [2024-05-15 10:59:07.806843] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:51.642 [2024-05-15 10:59:07.806859] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:51.642 [2024-05-15 10:59:07.806880] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
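The @74/@75 bookkeeping that brackets each phase of this test counts RPC notifications on the host-side app (e.g. bdev add/remove events) past a high-water mark. A reconstruction from those records; the accumulation matches the notify_id progression 0 -> 1 -> 2 -> 4 seen in this trace (a sketch, not the canonical host/discovery.sh):

get_notification_count() {
    # Notifications newer than the last seen id, counted with jq.
    notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i $notify_id | jq '. | length')
    notify_id=$((notify_id + notification_count))
}

is_notification_count_eq() {
    local expected_count=$1
    waitforcondition 'get_notification_count && ((notification_count == expected_count))'
}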
00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:51.642 [2024-05-15 10:59:07.816507] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:51.642 [2024-05-15 10:59:07.816772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.642 [2024-05-15 10:59:07.816802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1491840 with addr=10.0.0.2, port=4420 00:18:51.642 [2024-05-15 10:59:07.816820] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1491840 is same with the state(5) to be set 00:18:51.642 [2024-05-15 10:59:07.816844] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1491840 (9): Bad file descriptor 00:18:51.642 [2024-05-15 10:59:07.816867] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:51.642 [2024-05-15 10:59:07.816883] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:51.642 [2024-05-15 10:59:07.816898] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:51.642 [2024-05-15 10:59:07.816919] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
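A reading aid for the -s flags throughout: the test drives two SPDK instances, a target answering on the default RPC socket (nvmf_create_subsystem, nvmf_subsystem_add_listener, ...) and a host-side app answering on /tmp/host.sock (bdev_nvme_get_controllers, notify_get_notifications, ...). A hypothetical sketch of that layout; the core masks and binary path are assumptions, and the actual launch lives in host/discovery.sh:

./build/bin/nvmf_tgt -m 0x1 &                    # target, default /var/tmp/spdk.sock
./build/bin/nvmf_tgt -m 0x2 -r /tmp/host.sock &  # host side, private RPC socket
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0   # lands on the target
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers        # queries the host app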
00:18:51.642 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.642 [2024-05-15 10:59:07.826588] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:51.642 [2024-05-15 10:59:07.826844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.642 [2024-05-15 10:59:07.826874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1491840 with addr=10.0.0.2, port=4420 00:18:51.642 [2024-05-15 10:59:07.826891] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1491840 is same with the state(5) to be set 00:18:51.642 [2024-05-15 10:59:07.826916] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1491840 (9): Bad file descriptor 00:18:51.642 [2024-05-15 10:59:07.826963] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:51.642 [2024-05-15 10:59:07.827005] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:51.642 [2024-05-15 10:59:07.827019] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:51.642 [2024-05-15 10:59:07.827039] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:51.642 [2024-05-15 10:59:07.836665] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:51.642 [2024-05-15 10:59:07.836955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.642 [2024-05-15 10:59:07.836999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1491840 with addr=10.0.0.2, port=4420 00:18:51.642 [2024-05-15 10:59:07.837016] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1491840 is same with the state(5) to be set 00:18:51.642 [2024-05-15 10:59:07.837037] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1491840 (9): Bad file descriptor 00:18:51.642 [2024-05-15 10:59:07.837058] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:51.642 [2024-05-15 10:59:07.837077] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:51.642 [2024-05-15 10:59:07.837090] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:51.642 [2024-05-15 10:59:07.837109] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
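One more helper shows up in the duplicate-discovery check a few records below (host/discovery.sh@143/@149): NOT, which succeeds only when the wrapped command fails. A simplified reconstruction; the @648-@675 expansions below additionally validate the argument via valid_exec_arg and special-case signal exits (es > 128), which this sketch omits:

NOT() {
    local es=0
    "$@" || es=$?
    # Invert the result: a failing rpc_cmd (e.g. the -17 "File exists"
    # response below) makes NOT return success.
    (( es != 0 ))
}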
00:18:51.642 [2024-05-15 10:59:07.846742] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:51.643 [2024-05-15 10:59:07.847022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.643 [2024-05-15 10:59:07.847050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1491840 with addr=10.0.0.2, port=4420 00:18:51.643 [2024-05-15 10:59:07.847066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1491840 is same with the state(5) to be set 00:18:51.643 [2024-05-15 10:59:07.847089] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1491840 (9): Bad file descriptor 00:18:51.643 [2024-05-15 10:59:07.847123] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:18:51.643 [2024-05-15 10:59:07.847140] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:18:51.643 [2024-05-15 10:59:07.847153] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:18:51.643 [2024-05-15 10:59:07.847172] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:51.643 [2024-05-15 10:59:07.847230] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:18:51.643 [2024-05-15 10:59:07.847256] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:51.643 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:18:51.643 10:59:07 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # sleep 1 00:18:53.016 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:53.016 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:18:53.016 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_paths nvme0 00:18:53.016 10:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:18:53.016 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.016 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.016 10:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:18:53.016 10:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:18:53.016 10:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:18:53.016 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.016 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ 4421 == \4\4\2\1 ]] 00:18:53.016 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:53.016 10:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:18:53.016 10:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:18:53.016 10:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && 
((notification_count == expected_count))' 00:18:53.016 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:53.016 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:53.016 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:53.016 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:53.016 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:18:53.016 10:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:53.016 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.016 10:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_subsystem_names 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.017 10:59:08 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_bdev_list 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.017 10:59:08 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:53.017 10:59:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.017 10:59:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # [[ '' == '' ]] 00:18:53.017 10:59:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:53.017 10:59:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:18:53.017 10:59:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:18:53.017 10:59:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:18:53.017 10:59:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@910 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:18:53.017 10:59:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local max=10 00:18:53.017 10:59:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # (( max-- )) 00:18:53.017 10:59:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:18:53.017 10:59:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # get_notification_count 00:18:53.017 10:59:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:18:53.017 10:59:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.017 10:59:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:18:53.017 10:59:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.017 10:59:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.017 10:59:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:18:53.017 10:59:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:18:53.017 10:59:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( notification_count == expected_count )) 00:18:53.017 10:59:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # return 0 00:18:53.017 10:59:09 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:53.017 10:59:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.017 10:59:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:53.952 [2024-05-15 10:59:10.140177] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:18:53.952 [2024-05-15 10:59:10.140230] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:18:53.952 [2024-05-15 10:59:10.140257] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:18:54.211 [2024-05-15 10:59:10.226536] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:18:54.211 [2024-05-15 10:59:10.334046] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:18:54.211 [2024-05-15 10:59:10.334093] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:18:54.211 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.211 10:59:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:54.211 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:54.211 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:54.211 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:54.211 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:54.211 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:54.211 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:54.211 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:54.211 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.211 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:54.211 request: 00:18:54.211 { 00:18:54.211 "name": "nvme", 00:18:54.211 "trtype": 
"tcp", 00:18:54.211 "traddr": "10.0.0.2", 00:18:54.211 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:54.211 "adrfam": "ipv4", 00:18:54.211 "trsvcid": "8009", 00:18:54.211 "wait_for_attach": true, 00:18:54.211 "method": "bdev_nvme_start_discovery", 00:18:54.211 "req_id": 1 00:18:54.211 } 00:18:54.211 Got JSON-RPC error response 00:18:54.211 response: 00:18:54.211 { 00:18:54.211 "code": -17, 00:18:54.211 "message": "File exists" 00:18:54.211 } 00:18:54.211 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:54.211 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:54.211 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:54.211 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:54.211 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:54.211 10:59:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:18:54.211 10:59:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:54.211 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.211 10:59:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:54.211 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:54.211 10:59:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:54.211 10:59:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:54.212 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.212 10:59:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:18:54.212 10:59:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:18:54.212 10:59:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:54.212 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.212 10:59:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:54.212 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:54.212 10:59:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:54.212 10:59:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:54.212 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.212 10:59:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:54.212 10:59:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:54.212 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:54.212 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:54.212 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:54.212 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:18:54.212 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:54.212 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:54.212 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:18:54.212 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.212 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:54.470 request: 00:18:54.470 { 00:18:54.470 "name": "nvme_second", 00:18:54.470 "trtype": "tcp", 00:18:54.470 "traddr": "10.0.0.2", 00:18:54.471 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:54.471 "adrfam": "ipv4", 00:18:54.471 "trsvcid": "8009", 00:18:54.471 "wait_for_attach": true, 00:18:54.471 "method": "bdev_nvme_start_discovery", 00:18:54.471 "req_id": 1 00:18:54.471 } 00:18:54.471 Got JSON-RPC error response 00:18:54.471 response: 00:18:54.471 { 00:18:54.471 "code": -17, 00:18:54.471 "message": "File exists" 00:18:54.471 } 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:18:54.471 10:59:10 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.471 10:59:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:55.406 [2024-05-15 10:59:11.545651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:55.406 [2024-05-15 10:59:11.545710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1494c00 with addr=10.0.0.2, port=8010 00:18:55.406 [2024-05-15 10:59:11.545755] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:55.406 [2024-05-15 10:59:11.545773] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:55.406 [2024-05-15 10:59:11.545788] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:56.342 [2024-05-15 10:59:12.548001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.342 [2024-05-15 10:59:12.548036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1494c00 with addr=10.0.0.2, port=8010 00:18:56.342 [2024-05-15 10:59:12.548057] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:18:56.342 [2024-05-15 10:59:12.548070] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:18:56.342 [2024-05-15 10:59:12.548082] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:18:57.719 [2024-05-15 10:59:13.550186] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:18:57.719 request: 00:18:57.719 { 00:18:57.719 "name": "nvme_second", 00:18:57.719 "trtype": "tcp", 00:18:57.719 "traddr": "10.0.0.2", 00:18:57.719 "hostnqn": "nqn.2021-12.io.spdk:test", 00:18:57.719 "adrfam": "ipv4", 00:18:57.719 "trsvcid": "8010", 00:18:57.719 "attach_timeout_ms": 3000, 00:18:57.719 "method": "bdev_nvme_start_discovery", 00:18:57.719 "req_id": 1 00:18:57.719 } 00:18:57.719 Got JSON-RPC error response 00:18:57.719 response: 00:18:57.719 { 00:18:57.719 "code": -110, 00:18:57.719 "message": "Connection timed out" 00:18:57.719 } 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:57.719 10:59:13 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2851597 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:57.719 rmmod nvme_tcp 00:18:57.719 rmmod nvme_fabrics 00:18:57.719 rmmod nvme_keyring 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2851567 ']' 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2851567 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@946 -- # '[' -z 2851567 ']' 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@950 -- # kill -0 2851567 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # uname 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2851567 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@964 
-- # echo 'killing process with pid 2851567' 00:18:57.719 killing process with pid 2851567 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # kill 2851567 00:18:57.719 [2024-05-15 10:59:13.687101] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@970 -- # wait 2851567 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:57.719 10:59:13 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.978 10:59:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:57.978 10:59:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.882 10:59:15 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:59.882 00:18:59.882 real 0m14.580s 00:18:59.882 user 0m21.143s 00:18:59.882 sys 0m3.211s 00:18:59.882 10:59:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # xtrace_disable 00:18:59.883 10:59:15 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:18:59.883 ************************************ 00:18:59.883 END TEST nvmf_host_discovery 00:18:59.883 ************************************ 00:18:59.883 10:59:16 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:59.883 10:59:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:18:59.883 10:59:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:18:59.883 10:59:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:59.883 ************************************ 00:18:59.883 START TEST nvmf_host_multipath_status 00:18:59.883 ************************************ 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:18:59.883 * Looking for test storage... 
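Before the multipath test output begins in earnest, note that every wait in the discovery test above went through the same waitforcondition helper from common/autotest_common.sh. The sketch below is reconstructed from the @910-@914 frames visible in the trace; the sleep between attempts is an assumption, since it does not appear in this excerpt:

    # Bounded eval-and-retry loop, as traced at common/autotest_common.sh@910-@914.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0   # condition string re-evaluated each pass
            sleep 1                    # assumed pause; not visible in the trace
        done
        return 1
    }
    # Usage, exactly as in the discovery test:
    #   waitforcondition '[[ "$(get_bdev_list)" == "" ]]'

The NOT wrapper around the repeated bdev_nvme_start_discovery calls is the mirror image: it runs the RPC expecting a non-zero exit, so the -17 "File exists" responses above are the desired idempotency errors, and the -110 "Connection timed out" response is the expected outcome of the 3000 ms attach timeout (-T 3000) against port 8010, where nothing is listening.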
00:18:59.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:59.883 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:00.141 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:00.141 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:19:00.141 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:00.141 10:59:16 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:00.141 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:19:00.141 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:00.141 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:00.141 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:00.141 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:00.141 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:00.141 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:00.141 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:00.141 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:00.141 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:00.141 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:00.141 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:19:00.141 10:59:16 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:02.673 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:02.673 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
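The block above is common.sh enumerating candidate NICs: each PCI function is matched against known vendor:device pairs (0x8086:0x159b is the Intel E810 "ice" part found twice here, at 0000:0a:00.0 and 0000:0a:00.1), and the matching kernel interfaces are then read out of sysfs. A minimal sketch of the same sysfs walk, independent of the test framework:

    # Minimal sketch (not the common.sh implementation): list net devices
    # backed by Intel E810 functions, matched by PCI vendor/device ID.
    for pci in /sys/bus/pci/devices/*; do
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net device ${net##*/} under ${pci##*/}"
        done
    done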
00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:02.673 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:02.673 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:02.673 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:02.674 10:59:18 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:02.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:02.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:19:02.674 00:19:02.674 --- 10.0.0.2 ping statistics --- 00:19:02.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.674 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:02.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:02.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:19:02.674 00:19:02.674 --- 10.0.0.1 ping statistics --- 00:19:02.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.674 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@720 -- # xtrace_disable 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2855177 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2855177 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 2855177 ']' 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:02.674 10:59:18 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:02.674 [2024-05-15 10:59:18.735037] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
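The nvmf_tcp_init sequence above is what gives the rest of the run its point-to-point topology: one NIC port (cvl_0_0, 10.0.0.2) is moved into a network namespace where the target will live, the other (cvl_0_1, 10.0.0.1) stays in the root namespace for the initiator, and the two pings confirm reachability in each direction. Condensed to its essentials, with the interface and namespace names taken from this log (run as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator-side port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

This is also why nvmf_tgt was launched under ip netns exec cvl_0_0_ns_spdk a few lines above: only a process inside the namespace can bind 10.0.0.2.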
00:19:02.674 [2024-05-15 10:59:18.735119] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.674 EAL: No free 2048 kB hugepages reported on node 1 00:19:02.674 [2024-05-15 10:59:18.815453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:02.933 [2024-05-15 10:59:18.937206] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.933 [2024-05-15 10:59:18.937276] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.933 [2024-05-15 10:59:18.937292] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.933 [2024-05-15 10:59:18.937305] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.933 [2024-05-15 10:59:18.937317] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:02.933 [2024-05-15 10:59:18.937440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.933 [2024-05-15 10:59:18.937446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.933 10:59:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:02.933 10:59:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:19:02.933 10:59:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:02.933 10:59:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:02.933 10:59:19 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:02.933 10:59:19 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.933 10:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2855177 00:19:02.933 10:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:03.191 [2024-05-15 10:59:19.305512] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:03.191 10:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:03.449 Malloc0 00:19:03.449 10:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:03.716 10:59:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:04.005 10:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:04.272 [2024-05-15 10:59:20.324682] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be 
removed in v24.09 00:19:04.272 [2024-05-15 10:59:20.324999] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.272 10:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:04.531 [2024-05-15 10:59:20.569585] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:04.531 10:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2855455 00:19:04.531 10:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:04.531 10:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:04.531 10:59:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2855455 /var/tmp/bdevperf.sock 00:19:04.531 10:59:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@827 -- # '[' -z 2855455 ']' 00:19:04.531 10:59:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:04.531 10:59:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:04.531 10:59:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:04.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
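With the subsystem now listening on both 4420 and 4421, bdevperf (below) attaches a single Nvme0 bdev across the two portals (the second bdev_nvme_attach_controller adds -x multipath), and the long run of check_status calls that follows verifies per-portal state after each ANA change. Every port_status step is the same query: dump bdevperf's I/O paths and select one attribute for one trsvcid. A reconstruction under that reading (the wrapper shape is an assumption; the jq filter is verbatim from the trace):

    # Assumes bdevperf is already up and serving RPC on /var/tmp/bdevperf.sock.
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock"
    port_status() {    # e.g. port_status 4420 current true
        local port=$1 attr=$2 expected=$3 actual
        actual=$($RPC bdev_nvme_get_io_paths \
            | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
        [[ $actual == "$expected" ]]
    }

Each set_ANA_state step then flips a listener with nvmf_subsystem_listener_set_ana_state (optimized or non_optimized here) and check_status re-reads current, connected, and accessible for both ports.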
00:19:04.531 10:59:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:04.531 10:59:20 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:05.465 10:59:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:05.465 10:59:21 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # return 0 00:19:05.465 10:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:05.723 10:59:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:19:06.290 Nvme0n1 00:19:06.290 10:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:06.548 Nvme0n1 00:19:06.807 10:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:19:06.807 10:59:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:08.717 10:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:19:08.717 10:59:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:19:08.976 10:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:09.235 10:59:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:19:10.170 10:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:19:10.170 10:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:10.170 10:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.170 10:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:10.428 10:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.428 10:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:10.428 10:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.428 10:59:26 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:10.688 10:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:10.688 10:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:10.688 10:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.688 10:59:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:10.947 10:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:10.947 10:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:10.947 10:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:10.947 10:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:11.206 10:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:11.206 10:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:11.206 10:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.206 10:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:11.467 10:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:11.467 10:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:11.467 10:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:11.467 10:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:11.728 10:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:11.728 10:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:19:11.728 10:59:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:11.986 10:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:12.245 10:59:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:19:13.179 10:59:29 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:19:13.179 10:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:13.179 10:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:13.180 10:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:13.438 10:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:13.438 10:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:13.438 10:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:13.438 10:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:13.697 10:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:13.697 10:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:13.697 10:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:13.697 10:59:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:13.955 10:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:13.955 10:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:13.955 10:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:13.955 10:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:14.213 10:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:14.213 10:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:14.213 10:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.213 10:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:14.472 10:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:14.472 10:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:14.472 10:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:14.472 10:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:14.730 10:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:14.730 10:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:19:14.730 10:59:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:14.989 10:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:19:15.247 10:59:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:19:16.182 10:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:19:16.182 10:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:16.182 10:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:16.182 10:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:16.440 10:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:16.440 10:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:16.440 10:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:16.440 10:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:16.699 10:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:16.699 10:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:16.699 10:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:16.699 10:59:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:16.957 10:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:16.957 10:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:16.957 10:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:16.957 10:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:17.215 10:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:17.215 10:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:17.215 10:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.215 10:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:17.472 10:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:17.472 10:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:17.472 10:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:17.472 10:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:17.771 10:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:17.771 10:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:19:17.771 10:59:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:18.029 10:59:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:18.286 10:59:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:19:19.217 10:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:19:19.217 10:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:19.217 10:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:19.217 10:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:19.474 10:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:19.474 10:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:19.474 10:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:19.474 10:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:19.731 10:59:35 
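The @59/@60 pairs traced throughout this run (most recently non_optimized for 4420 and inaccessible for 4421, via the @104 call above) are the set_ANA_state helper: two nvmf_subsystem_listener_set_ana_state RPCs against the target, one per listener port. A sketch reconstructed from the trace, with $rpc_py standing in for the absolute scripts/rpc.py path:

set_ANA_state() {
    # $1 -> ANA state for the 4420 listener, $2 -> for the 4421 listener
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}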
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:19.731 10:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:19.731 10:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:19.731 10:59:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:19.988 10:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:19.988 10:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:19.988 10:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:19.988 10:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:20.246 10:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.246 10:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:20.246 10:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.246 10:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:20.503 10:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:20.503 10:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:20.503 10:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:20.503 10:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:20.761 10:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:20.761 10:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:19:20.761 10:59:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:21.018 10:59:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:21.274 10:59:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:19:22.205 10:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:19:22.205 10:59:38 
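The check_status false false true true false false announced just above fans out into six port_status assertions in a fixed order; the remaining checks of that round continue below. Reconstructed from the @68-@73 markers, the wrapper is essentially:

check_status() {
    # Expected values, in trace order:
    # $1 current(4420)    $2 current(4421)
    # $3 connected(4420)  $4 connected(4421)
    # $5 accessible(4420) $6 accessible(4421)
    port_status 4420 current "$1"       # @68
    port_status 4421 current "$2"       # @69
    port_status 4420 connected "$3"     # @70
    port_status 4421 connected "$4"     # @71
    port_status 4420 accessible "$5"    # @72
    port_status 4421 accessible "$6"    # @73
}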
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:22.205 10:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:22.205 10:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:22.462 10:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:22.462 10:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:22.462 10:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:22.462 10:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:22.719 10:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:22.719 10:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:22.719 10:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:22.719 10:59:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:22.977 10:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:22.977 10:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:22.977 10:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:22.977 10:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:23.233 10:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:23.233 10:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:23.233 10:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:23.233 10:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:23.490 10:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:23.490 10:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:23.490 10:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:23.490 10:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:23.748 10:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:23.748 10:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:19:23.748 10:59:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:19:24.005 10:59:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:24.262 10:59:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:19:25.196 10:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:19:25.196 10:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:25.196 10:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.196 10:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:25.454 10:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:25.454 10:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:25.454 10:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.454 10:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:25.712 10:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.712 10:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:25.712 10:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.712 10:59:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:25.970 10:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:25.970 10:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:25.970 10:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:25.970 10:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:26.228 10:59:42 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:26.228 10:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:19:26.228 10:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:26.228 10:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:26.487 10:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:26.487 10:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:26.487 10:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:26.487 10:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:26.745 10:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:26.745 10:59:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:19:27.003 10:59:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:19:27.003 10:59:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:19:27.260 10:59:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:27.518 10:59:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:19:28.451 10:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:19:28.451 10:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:28.451 10:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.452 10:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:28.710 10:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.710 10:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:28.710 10:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.710 10:59:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").current' 00:19:28.968 10:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:28.968 10:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:28.968 10:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:28.968 10:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:29.226 10:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:29.226 10:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:29.226 10:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:29.226 10:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:29.484 10:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:29.484 10:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:29.484 10:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:29.484 10:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:29.743 10:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:29.743 10:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:29.743 10:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:29.743 10:59:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:30.001 10:59:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:30.001 10:59:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:19:30.001 10:59:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:30.259 10:59:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:19:30.517 10:59:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:19:31.486 10:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true 
true true true true 00:19:31.486 10:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:19:31.486 10:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.486 10:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:31.745 10:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:31.745 10:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:31.745 10:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:31.745 10:59:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:32.003 10:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:32.003 10:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:32.003 10:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:32.003 10:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:32.261 10:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:32.261 10:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:32.261 10:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:32.261 10:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:32.520 10:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:32.520 10:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:32.520 10:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:32.520 10:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:32.778 10:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:32.778 10:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:32.778 10:59:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:32.778 10:59:48 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:33.036 10:59:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:33.036 10:59:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:19:33.036 10:59:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:33.294 10:59:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:19:33.552 10:59:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:19:34.487 10:59:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:19:34.487 10:59:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:34.487 10:59:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:34.487 10:59:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:34.745 10:59:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:34.745 10:59:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:19:34.745 10:59:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:34.745 10:59:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:35.002 10:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:35.002 10:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:35.002 10:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:35.002 10:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:35.260 10:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:35.260 10:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:35.260 10:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:35.260 10:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:35.518 10:59:51 
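The @131 round in progress above (its remaining assertions continue below) is the first full status check under active_active: after the policy switch at @116, both non_optimized listeners report current==true at once, whereas the equivalent active_passive round earlier (@102) had only 4420 as the current path. The switch itself is a single RPC to bdevperf, as traced at @116:

scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active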
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:35.518 10:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:35.518 10:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:35.518 10:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:35.776 10:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:35.776 10:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:35.776 10:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:35.776 10:59:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:36.034 10:59:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:36.034 10:59:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:19:36.034 10:59:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:19:36.291 10:59:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:19:36.549 10:59:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:19:37.483 10:59:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:19:37.483 10:59:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:37.483 10:59:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:37.483 10:59:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:37.741 10:59:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:37.741 10:59:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:37.741 10:59:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:37.741 10:59:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:38.000 10:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:38.000 10:59:54 
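For hand debugging, the one-attribute-per-call pipelines above can be collapsed into a single listing of every path's state; an illustrative one-liner, not part of the test script:

scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
    | jq -r '.poll_groups[].io_paths[]
        | "\(.transport.trsvcid): current=\(.current) connected=\(.connected) accessible=\(.accessible)"'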
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:38.000 10:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:38.000 10:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:38.259 10:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:38.259 10:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:38.259 10:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:38.259 10:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:38.518 10:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:38.518 10:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:38.518 10:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:38.518 10:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:38.776 10:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:38.776 10:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:19:38.776 10:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:38.776 10:59:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:39.035 10:59:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:39.035 10:59:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2855455 00:19:39.035 10:59:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 2855455 ']' 00:19:39.035 10:59:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 2855455 00:19:39.035 10:59:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname 00:19:39.035 10:59:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:39.035 10:59:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2855455 00:19:39.035 10:59:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_2 00:19:39.035 10:59:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_2 = sudo ']' 00:19:39.035 10:59:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 
2855455' 00:19:39.035 killing process with pid 2855455 00:19:39.035 10:59:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 2855455 00:19:39.035 10:59:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 2855455 00:19:39.035 Connection closed with partial response: 00:19:39.035 00:19:39.035 00:19:39.312 10:59:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2855455 00:19:39.312 10:59:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:19:39.312 [2024-05-15 10:59:20.635814] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:19:39.312 [2024-05-15 10:59:20.635898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2855455 ] 00:19:39.312 EAL: No free 2048 kB hugepages reported on node 1 00:19:39.312 [2024-05-15 10:59:20.707724] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.312 [2024-05-15 10:59:20.818290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.312 Running I/O for 90 seconds... 00:19:39.312 [2024-05-15 10:59:37.045692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:54432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.312 [2024-05-15 10:59:37.045745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:19:39.312 [2024-05-15 10:59:37.045826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:54440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.312 [2024-05-15 10:59:37.045847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:19:39.312 [2024-05-15 10:59:37.045872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:54448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.312 [2024-05-15 10:59:37.045889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:19:39.312 [2024-05-15 10:59:37.045911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:54456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.312 [2024-05-15 10:59:37.045950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:19:39.312 [2024-05-15 10:59:37.045986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:54464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.312 [2024-05-15 10:59:37.046003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:19:39.312 [2024-05-15 10:59:37.046026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:54472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.312 [2024-05-15 10:59:37.046042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:19:39.312 [2024-05-15 10:59:37.046065] 
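The teardown traced above (autotest_common.sh @946-@970) is the stock killprocess helper: validate the pid argument, confirm the process is alive, look up its command name, then signal and reap it. A sketch reconstructed from those markers; the sudo branch is not exercised in this trace (the process name here is reactor_2), so its handling below is an assumption:

killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # @946: refuse an empty pid
    kill -0 "$pid" || return 1                # @950: process must still exist
    local process_name=kill
    if [ "$(uname)" = Linux ]; then           # @951
        process_name=$(ps --no-headers -o comm= "$pid")   # @952
    fi
    if [ "$process_name" = sudo ]; then       # @956: assumed; branch not taken here
        sudo kill -9 "$pid"
    else
        echo "killing process with pid $pid"  # @964
        kill "$pid"                           # @965: plain SIGTERM for reactor_2
    fi
    wait "$pid" || true                       # @970: reap; tolerate nonzero exit
}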
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:54480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.312 [2024-05-15 10:59:37.046081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:19:39.312 [2024-05-15 10:59:37.046104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:54488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.312 [2024-05-15 10:59:37.046120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:19:39.312 [2024-05-15 10:59:37.046142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:54496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.312 [2024-05-15 10:59:37.046159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:19:39.312 [2024-05-15 10:59:37.046181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.313 [2024-05-15 10:59:37.046197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:19:39.313 [2024-05-15 10:59:37.046220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:54512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.313 [2024-05-15 10:59:37.046260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:19:39.313 [2024-05-15 10:59:37.046284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:54520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.313 [2024-05-15 10:59:37.046301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:19:39.313 [2024-05-15 10:59:37.046323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:54528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.313 [2024-05-15 10:59:37.046339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:19:39.313 [2024-05-15 10:59:37.046360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:54536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.313 [2024-05-15 10:59:37.046376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:19:39.313 [2024-05-15 10:59:37.046398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:54544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.313 [2024-05-15 10:59:37.046414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:19:39.313 [2024-05-15 10:59:37.046436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:54552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.313 [2024-05-15 10:59:37.046452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 
sqhd:0059 p:0 m:0 dnr:0 00:19:39.313 [2024-05-15 10:59:37.046473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:54560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.313 [2024-05-15 10:59:37.046489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:19:39.313 [2024-05-15 10:59:37.046526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:54568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.313 [2024-05-15 10:59:37.046542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:19:39.313 [2024-05-15 10:59:37.046564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.313 [2024-05-15 10:59:37.046594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:19:39.313 [2024-05-15 10:59:37.046616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:54584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.313 [2024-05-15 10:59:37.046632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:19:39.313 [2024-05-15 10:59:37.046654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:54592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.313 [2024-05-15 10:59:37.046670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:19:39.313 [2024-05-15 10:59:37.046691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:54600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.313 [2024-05-15 10:59:37.046706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:19:39.313 [2024-05-15 10:59:37.046729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:54608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.313 [2024-05-15 10:59:37.046744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:19:39.313 [2024-05-15 10:59:37.046770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:54616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.313 [2024-05-15 10:59:37.046786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:19:39.313 [2024-05-15 10:59:37.046808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:54624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.313 [2024-05-15 10:59:37.046823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:19:39.313 [2024-05-15 10:59:37.046846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:54632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.313 [2024-05-15 10:59:37.046861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:19:39.313 [2024-05-15 10:59:37.046883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:54640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.313 [2024-05-15 10:59:37.046898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:19:39.313 [2024-05-15 10:59:37.046920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:54648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.313 [2024-05-15 10:59:37.046958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:19:39.313 [2024-05-15 10:59:37.046983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:54656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.313 [2024-05-15 10:59:37.047000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:19:39.313 [2024-05-15 10:59:37.047023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:54664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.313 [2024-05-15 10:59:37.047039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:19:39.313 [2024-05-15 10:59:37.047062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:54672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.313 [2024-05-15 10:59:37.047079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:19:39.313 [2024-05-15 10:59:37.047101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:54680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.313 [2024-05-15 10:59:37.047117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:19:39.313 [2024-05-15 10:59:37.047140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:54688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.313 [2024-05-15 10:59:37.047157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:19:39.313 [2024-05-15 10:59:37.048985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:54696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:39.313 [2024-05-15 10:59:37.049011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:19:39.313 [2024-05-15 10:59:37.049043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:54704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.313 [2024-05-15 10:59:37.049061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:19:39.313 [2024-05-15 10:59:37.049093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:54712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:39.313 [2024-05-15 10:59:37.049110] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:19:39.313 [2024-05-15 10:59:37.049136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:54720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:39.313 [2024-05-15 10:59:37.049152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:19:39.313 [... several dozen further command/completion NOTICE pairs omitted: WRITEs to nsid:1 lba 54728-55072 at 10:59:37, then READs and WRITEs to nsid:1 lba 36200-36968 at 10:59:52, every one completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 ...]
00:19:39.315 Received shutdown signal, test time was about 32.120887 seconds
00:19:39.315
00:19:39.315 Latency(us)
00:19:39.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:39.315 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:39.315 Verification LBA range: start 0x0 length 0x4000
00:19:39.315 Nvme0n1 : 32.12 7984.23 31.19 0.00 0.00 16005.90 394.43 4026531.84
00:19:39.315 ===================================================================================================================
00:19:39.315 Total : 7984.23 31.19 0.00 0.00 16005.90 394.43 4026531.84
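The MiB/s column above is just IOPS times the 4096-byte IO size, converted to binary megabytes per second; a one-line check with bc reproduces it:

  $ echo 'scale=4; 7984.23 * 4096 / (1024 * 1024)' | bc
  31.1884

which rounds to the 31.19 MiB/s reported for both Nvme0n1 and Total.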
10:59:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
10:59:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
10:59:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
10:59:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
10:59:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
10:59:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
10:59:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
10:59:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
10:59:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
10:59:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:39.574 rmmod nvme_tcp
00:19:39.574 rmmod nvme_fabrics
00:19:39.574 rmmod nvme_keyring
10:59:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
10:59:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
10:59:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
10:59:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2855177 ']'
10:59:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2855177
10:59:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@946 -- # '[' -z 2855177 ']'
10:59:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # kill -0 2855177
10:59:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # uname
10:59:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
10:59:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2855177
10:59:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # process_name=reactor_0
10:59:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
10:59:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2855177'
00:19:39.574 killing process with pid 2855177
10:59:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # kill 2855177
00:19:39.574 [2024-05-15 10:59:55.756046] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
10:59:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # wait 2855177
10:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
10:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
10:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
10:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
10:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
10:59:56 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
10:59:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
10:59:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
10:59:58 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:19:42.368
00:19:42.368 real 0m42.031s
00:19:42.368 user 2m5.614s
00:19:42.368 sys 0m10.774s
10:59:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # xtrace_disable
10:59:58 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:19:42.368 ************************************
00:19:42.368 END TEST nvmf_host_multipath_status
00:19:42.368 ************************************
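The block above is the standard teardown for these nvmf tests: drop the subsystem over RPC, unload the kernel NVMe modules, then kill the target and wait for it to exit. Condensed into a plain sketch (every command is taken from the trace; nvmfpid stands in for the traced pid 2855177):

  # teardown sketch mirroring the traced sequence above
  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$spdk"/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # drop the subsystem first
  sync
  modprobe -v -r nvme-tcp       # also drags out nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"               # stop the nvmf_tgt reactor process
  wait "$nvmfpid"               # reap it so the next test starts clean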
00:19:42.368 10:59:58 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
10:59:58 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
10:59:58 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
10:59:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:19:42.368 ************************************
00:19:42.368 START TEST nvmf_discovery_remove_ifc
00:19:42.368 ************************************
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:19:42.368 * Looking for test storage...
00:19:42.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[... paths/export.sh@2-@6 elided: the sourced export script repeatedly prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to PATH, exports it, and echoes the final PATH; the near-identical multi-hundred-character PATH strings are omitted ...]
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']'
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']'
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]]
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable
10:59:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
[... nvmf/common.sh@289-@318 (11:00:00) array bookkeeping elided: pci_devs/pci_net_devs/pci_drivers/net_devs declared; e810 seeded with device ids 0x1592/0x159b, x722 with 0x37d2, mlx with 0xa2dc/0x1021/0xa2d6/0x101d/0x1017/0x1019/0x1015/0x1013 ...]
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 ))
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:19:44.268 Found 0000:0a:00.0 (0x8086 - 0x159b)
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:19:44.268 Found 0000:0a:00.1 (0x8086 - 0x159b)
[... per-device checks elided for both functions: driver ice is neither unknown nor unbound, 0x159b matches neither 0x1017 nor 0x1019, transport is tcp not rdma ...]
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:19:44.268 Found net devices under 0000:0a:00.0: cvl_0_0
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:19:44.268 Found net devices under 0000:0a:00.1: cvl_0_1
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 ))
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]]
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
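The "Found net devices" lines above come from a sysfs glob per PCI function; the same lookup can be reproduced standalone (a sketch; the two addresses are the e810 functions found above):

  # map each detected PCI function to its kernel net device(s) via sysfs
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
          echo "Found net devices under $pci: ${netdir##*/}"
      done
  done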
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 ))
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:19:44.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:44.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms
00:19:44.527
00:19:44.527 --- 10.0.0.2 ping statistics ---
00:19:44.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:44.527 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:44.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:44.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms
00:19:44.527
00:19:44.527 --- 10.0.0.1 ping statistics ---
00:19:44.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:44.527 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']'
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@720 -- # xtrace_disable
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2862015
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2862015
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 2862015 ']'
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:44.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable
11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
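Stripped of the xtrace bookkeeping, the namespace plumbing traced above moves the target NIC into its own network stack so target and initiator can talk over real wire on one host; the bare commands are (names and addresses exactly as traced, assuming root):

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target NIC into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2                                  # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator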
00:19:44.527 [2024-05-15 11:00:00.641847] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.527 EAL: No free 2048 kB hugepages reported on node 1 00:19:44.527 [2024-05-15 11:00:00.720648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.786 [2024-05-15 11:00:00.840169] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.786 [2024-05-15 11:00:00.840221] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:44.786 [2024-05-15 11:00:00.840236] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:44.786 [2024-05-15 11:00:00.840249] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:44.786 [2024-05-15 11:00:00.840260] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:44.786 [2024-05-15 11:00:00.840306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.786 11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:44.786 11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:19:44.786 11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:44.786 11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:44.786 11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:44.786 11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.786 11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:19:44.786 11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.786 11:00:00 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:44.786 [2024-05-15 11:00:00.994589] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.786 [2024-05-15 11:00:01.002557] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:44.786 [2024-05-15 11:00:01.002802] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:44.786 null0 00:19:45.045 [2024-05-15 11:00:01.034727] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:45.045 11:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.045 11:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2862117 00:19:45.045 11:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2862117 /tmp/host.sock 00:19:45.045 11:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:19:45.045 11:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@827 -- # '[' -z 2862117 ']' 00:19:45.045 11:00:01 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # local rpc_addr=/tmp/host.sock 00:19:45.045 11:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local max_retries=100 00:19:45.045 11:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:45.045 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:45.045 11:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # xtrace_disable 00:19:45.045 11:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:45.045 [2024-05-15 11:00:01.102541] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:19:45.045 [2024-05-15 11:00:01.102632] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2862117 ] 00:19:45.045 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.045 [2024-05-15 11:00:01.175626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.330 [2024-05-15 11:00:01.286526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:45.330 11:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:19:45.330 11:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # return 0 00:19:45.330 11:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:45.330 11:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:19:45.330 11:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.330 11:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:45.330 11:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.330 11:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:19:45.330 11:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.330 11:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:45.330 11:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.330 11:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:19:45.330 11:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.330 11:00:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:46.259 [2024-05-15 11:00:02.475197] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:46.259 [2024-05-15 11:00:02.475237] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:46.259 [2024-05-15 
11:00:02.475261] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:46.517 [2024-05-15 11:00:02.561562] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:19:46.517 [2024-05-15 11:00:02.743668] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:46.517 [2024-05-15 11:00:02.743732] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:46.517 [2024-05-15 11:00:02.743772] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:46.517 [2024-05-15 11:00:02.743802] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:19:46.517 [2024-05-15 11:00:02.743838] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:46.517 11:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.517 11:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:19:46.517 11:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:46.517 11:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:46.517 11:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:46.517 11:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.517 11:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:46.517 11:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:46.517 11:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:46.774 [2024-05-15 11:00:02.751695] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1b49010 was disconnected and freed. delete nvme_qpair. 
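get_bdev_list, exercised repeatedly from here on, is nothing more than an RPC call piped through jq. A sketch matching the trace (rpc_cmd is the in-tree wrapper around scripts/rpc.py):

get_bdev_list() {
    # Names of every bdev known to the host app, sorted and space-joined.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}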
00:19:46.775 11:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.775 11:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:19:46.775 11:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:19:46.775 11:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:19:46.775 11:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:19:46.775 11:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:46.775 11:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:46.775 11:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:46.775 11:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.775 11:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:46.775 11:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:46.775 11:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:46.775 11:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.775 11:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:46.775 11:00:02 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:47.706 11:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:47.706 11:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:47.706 11:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:47.706 11:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.706 11:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:47.706 11:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:47.706 11:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:47.706 11:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.706 11:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:47.706 11:00:03 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:49.078 11:00:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:49.078 11:00:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:49.078 11:00:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.078 11:00:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:49.078 11:00:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:49.078 11:00:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:19:49.078 11:00:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:49.078 11:00:04 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.078 11:00:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:49.078 11:00:04 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:50.022 11:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:50.022 11:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:50.022 11:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.022 11:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:50.022 11:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:50.022 11:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:50.022 11:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:50.022 11:00:05 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.022 11:00:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:50.022 11:00:06 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:50.955 11:00:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:50.955 11:00:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:50.955 11:00:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:50.955 11:00:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.955 11:00:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:50.955 11:00:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:50.955 11:00:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:50.955 11:00:07 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.955 11:00:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:50.955 11:00:07 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:51.887 11:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:51.887 11:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:51.887 11:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:51.887 11:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.887 11:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:51.887 11:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:51.887 11:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:51.887 11:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
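The get_bdev_list/sleep cycle repeating above is the test waiting out the controller-loss timeout after it pulled the interface (discovery_remove_ifc.sh@75/@76 earlier in the trace). A plausible reconstruction of that step and of the @33/@34 loop; the in-tree helper may additionally bound the number of retries:

# Yank the target-side address and link out from under the connected controller.
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

wait_for_bdev() {
    # Poll once a second until the bdev list equals the expected string
    # ("" here: the test is waiting for nvme0n1 to be deleted).
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}
wait_for_bdev ''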
00:19:51.887 11:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:51.887 11:00:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:52.144 [2024-05-15 11:00:08.184775] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:19:52.144 [2024-05-15 11:00:08.184868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.144 [2024-05-15 11:00:08.184893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.144 [2024-05-15 11:00:08.184927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.144 [2024-05-15 11:00:08.184949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.144 [2024-05-15 11:00:08.184963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.144 [2024-05-15 11:00:08.184991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.144 [2024-05-15 11:00:08.185006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.144 [2024-05-15 11:00:08.185019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.144 [2024-05-15 11:00:08.185033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:52.144 [2024-05-15 11:00:08.185047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:52.144 [2024-05-15 11:00:08.185060] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b10380 is same with the state(5) to be set 00:19:52.144 [2024-05-15 11:00:08.194792] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b10380 (9): Bad file descriptor 00:19:52.144 [2024-05-15 11:00:08.204842] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:53.077 11:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:53.077 11:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:53.077 11:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:53.077 11:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.077 11:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:53.077 11:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:53.077 11:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:53.077 [2024-05-15 11:00:09.228997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:19:53.077 [2024-05-15 
11:00:09.229067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b10380 with addr=10.0.0.2, port=4420 00:19:53.077 [2024-05-15 11:00:09.229098] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b10380 is same with the state(5) to be set 00:19:53.077 [2024-05-15 11:00:09.229163] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b10380 (9): Bad file descriptor 00:19:53.077 [2024-05-15 11:00:09.229685] bdev_nvme.c:2879:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:19:53.077 [2024-05-15 11:00:09.229721] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:19:53.077 [2024-05-15 11:00:09.229738] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:19:53.077 [2024-05-15 11:00:09.229759] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:19:53.077 [2024-05-15 11:00:09.229794] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:53.077 [2024-05-15 11:00:09.229813] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:53.077 11:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.077 11:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:19:53.077 11:00:09 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:54.008 [2024-05-15 11:00:10.232343] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
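The errno-110 connect failures and failed resets above play out exactly as configured when discovery was started (discovery_remove_ifc.sh@69 earlier in the trace): reconnect attempts every second, I/O failed fast after one second, and the controller deleted after a two-second loss timeout. Restated as a standalone command:

rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery \
    -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
    -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach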
00:19:54.008 [2024-05-15 11:00:10.232430] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:19:54.008 [2024-05-15 11:00:10.232493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:54.008 [2024-05-15 11:00:10.232519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.008 [2024-05-15 11:00:10.232543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:54.008 [2024-05-15 11:00:10.232559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.008 [2024-05-15 11:00:10.232576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:54.008 [2024-05-15 11:00:10.232592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.008 [2024-05-15 11:00:10.232606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:54.008 [2024-05-15 11:00:10.232623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.008 [2024-05-15 11:00:10.232640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:19:54.008 [2024-05-15 11:00:10.232656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:54.008 [2024-05-15 11:00:10.232673] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:19:54.008 [2024-05-15 11:00:10.232781] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0f810 (9): Bad file descriptor 00:19:54.008 [2024-05-15 11:00:10.233811] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:19:54.008 [2024-05-15 11:00:10.233837] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:19:54.266 11:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:54.266 11:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:54.266 11:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.266 11:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:54.266 11:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:54.266 11:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:54.266 11:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:54.266 11:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.266 11:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:19:54.266 11:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:54.266 11:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:54.266 11:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:19:54.266 11:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:54.266 11:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:54.266 11:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.266 11:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:54.266 11:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:54.266 11:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:54.266 11:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:54.266 11:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.266 11:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:54.266 11:00:10 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:55.204 11:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:55.204 11:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:55.204 11:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.204 11:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:55.204 11:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:19:55.204 11:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:55.204 11:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:55.204 11:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.204 11:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:55.204 11:00:11 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:56.136 [2024-05-15 11:00:12.249412] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:19:56.136 [2024-05-15 11:00:12.249455] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:19:56.136 [2024-05-15 11:00:12.249477] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:19:56.136 [2024-05-15 11:00:12.336735] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:19:56.393 11:00:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:56.393 11:00:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:56.393 11:00:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:56.393 11:00:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:56.393 11:00:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.393 11:00:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:56.393 11:00:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:56.393 11:00:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.393 11:00:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:19:56.393 11:00:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:19:56.393 [2024-05-15 11:00:12.560470] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:19:56.393 [2024-05-15 11:00:12.560528] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:19:56.393 [2024-05-15 11:00:12.560567] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:19:56.393 [2024-05-15 11:00:12.560595] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:19:56.393 [2024-05-15 11:00:12.560611] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:19:56.393 [2024-05-15 11:00:12.567717] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1b53970 was disconnected and freed. delete nvme_qpair. 
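The re-attach above is the restore half of the test (discovery_remove_ifc.sh@82/@83/@86): put the address back, bring the link up, and let the still-running discovery service reconnect on its own. The new controller gets a fresh index, hence nvme1n1 rather than nvme0n1:

ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
wait_for_bdev nvme1n1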
00:19:57.325 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:19:57.325 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:57.325 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:19:57.325 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.325 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:57.325 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:19:57.325 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:19:57.325 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.325 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:19:57.325 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:19:57.325 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2862117 00:19:57.325 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 2862117 ']' 00:19:57.325 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 2862117 00:19:57.325 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:19:57.325 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:57.325 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2862117 00:19:57.325 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:19:57.325 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:19:57.325 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2862117' 00:19:57.325 killing process with pid 2862117 00:19:57.325 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 2862117 00:19:57.325 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 2862117 00:19:57.582 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:19:57.582 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:57.582 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:19:57.582 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:57.582 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:19:57.582 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:57.582 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:57.582 rmmod nvme_tcp 00:19:57.582 rmmod nvme_fabrics 00:19:57.839 rmmod nvme_keyring 00:19:57.839 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:57.839 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:19:57.839 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
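killprocess, traced above for hostpid 2862117, guards the kill with a liveness check and a sanity check on the process name. A sketch per the @946-@970 lines; the branch taken when the comm turns out to be "sudo" is elided here:

killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1                 # @946: require a pid
    kill -0 "$pid" 2>/dev/null || return 0    # @950: already gone, nothing to do
    # @951-@956: the helper inspects `ps --no-headers -o comm= $pid` and
    # special-cases a comm of "sudo"; this run sees "reactor_0", so the
    # plain path below is taken.
    echo "killing process with pid $pid"      # @964
    kill "$pid"                               # @965: SIGTERM
    wait "$pid"                               # @970: reap it
}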
00:19:57.839 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2862015 ']' 00:19:57.839 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2862015 00:19:57.839 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@946 -- # '[' -z 2862015 ']' 00:19:57.839 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # kill -0 2862015 00:19:57.839 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # uname 00:19:57.839 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:19:57.839 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2862015 00:19:57.839 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:19:57.839 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:19:57.839 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2862015' 00:19:57.839 killing process with pid 2862015 00:19:57.839 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # kill 2862015 00:19:57.839 [2024-05-15 11:00:13.866075] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:57.839 11:00:13 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # wait 2862015 00:19:58.097 11:00:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:58.097 11:00:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:58.097 11:00:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:58.097 11:00:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:58.097 11:00:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:58.097 11:00:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.097 11:00:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.097 11:00:14 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.995 11:00:16 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:59.995 00:19:59.995 real 0m18.058s 00:19:59.995 user 0m25.845s 00:19:59.995 sys 0m3.285s 00:19:59.995 11:00:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # xtrace_disable 00:19:59.995 11:00:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:19:59.995 ************************************ 00:19:59.995 END TEST nvmf_discovery_remove_ifc 00:19:59.995 ************************************ 00:19:59.995 11:00:16 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:19:59.995 11:00:16 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:19:59.995 11:00:16 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:19:59.995 11:00:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
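nvmftestfini, traced just above between the two tests, unwinds everything the setup created: unload the kernel initiator modules, kill the target, drop the namespace, and flush the leftover address. Approximately, with the namespace-removal step stated as an assumption about what _remove_spdk_ns does here:

sync                                   # @117
set +e                                 # @120: module unload may need retries
for i in {1..20}; do
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break  # @122/@123
    sleep 1    # simplification; the in-tree retry loop may differ
done
set -e
killprocess "$nvmfpid"                 # target pid, 2862015 in this run
_remove_spdk_ns                        # assumed to delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1               # @279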
00:20:00.293 ************************************ 00:20:00.293 START TEST nvmf_identify_kernel_target 00:20:00.293 ************************************ 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:00.293 * Looking for test storage... 00:20:00.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:20:00.293 11:00:16 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:20:00.293 11:00:16 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:02.821 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:02.821 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:02.821 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:02.821 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:02.822 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:02.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:02.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:20:02.822 00:20:02.822 --- 10.0.0.2 ping statistics --- 00:20:02.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.822 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:02.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:02.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:20:02.822 00:20:02.822 --- 10.0.0.1 ping statistics --- 00:20:02.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:02.822 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@728 -- # local ip 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:02.822 11:00:18 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:02.822 11:00:18 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:20:04.196 Waiting for block devices as requested 00:20:04.196 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:20:04.196 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:20:04.453 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:20:04.453 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:20:04.453 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:20:04.453 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:20:04.710 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:20:04.710 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:20:04.710 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:20:04.710 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:20:04.968 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:20:04.968 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:20:04.968 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:20:04.968 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:20:05.226 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:20:05.226 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:20:05.226 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:20:05.226 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:05.226 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:05.226 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:20:05.226 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:20:05.226 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:05.226 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:20:05.226 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:20:05.226 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:05.226 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:20:05.487 No valid GPT data, bailing 00:20:05.487 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:05.487 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:20:05.487 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:20:05.487 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:20:05.487 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:20:05.487 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:05.487 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:05.487 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:05.487 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:05.487 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:20:05.487 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:20:05.487 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:20:05.487 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:20:05.487 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:20:05.487 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:20:05.487 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:20:05.487 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:05.487 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:20:05.487 00:20:05.487 Discovery Log Number of Records 2, Generation counter 2 00:20:05.487 =====Discovery Log Entry 0====== 00:20:05.487 trtype: tcp 00:20:05.487 adrfam: ipv4 00:20:05.487 subtype: current discovery subsystem 00:20:05.487 treq: not specified, sq flow control disable supported 00:20:05.487 portid: 1 00:20:05.487 trsvcid: 4420 00:20:05.487 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:05.487 traddr: 10.0.0.1 00:20:05.487 eflags: none 00:20:05.487 sectype: none 00:20:05.487 =====Discovery Log Entry 1====== 00:20:05.487 trtype: tcp 00:20:05.487 adrfam: ipv4 00:20:05.487 subtype: nvme subsystem 00:20:05.487 treq: not specified, sq flow control disable supported 00:20:05.487 portid: 1 00:20:05.487 trsvcid: 4420 00:20:05.487 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:05.487 traddr: 10.0.0.1 00:20:05.487 eflags: none 00:20:05.487 sectype: none 00:20:05.487 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:20:05.487 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:20:05.487 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.487 ===================================================== 00:20:05.487 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:05.487 ===================================================== 00:20:05.487 Controller Capabilities/Features 00:20:05.487 ================================ 00:20:05.487 Vendor ID: 0000 00:20:05.487 Subsystem Vendor ID: 0000 00:20:05.487 Serial Number: 69db157e67705e0780d9 00:20:05.487 Model Number: Linux 00:20:05.487 Firmware Version: 6.7.0-68 00:20:05.487 Recommended Arb Burst: 0 00:20:05.487 IEEE OUI Identifier: 00 00 00 00:20:05.487 Multi-path I/O 00:20:05.487 May have multiple subsystem ports: No 00:20:05.487 May have multiple 
controllers: No 00:20:05.487 Associated with SR-IOV VF: No 00:20:05.487 Max Data Transfer Size: Unlimited 00:20:05.487 Max Number of Namespaces: 0 00:20:05.487 Max Number of I/O Queues: 1024 00:20:05.487 NVMe Specification Version (VS): 1.3 00:20:05.487 NVMe Specification Version (Identify): 1.3 00:20:05.487 Maximum Queue Entries: 1024 00:20:05.487 Contiguous Queues Required: No 00:20:05.487 Arbitration Mechanisms Supported 00:20:05.487 Weighted Round Robin: Not Supported 00:20:05.487 Vendor Specific: Not Supported 00:20:05.487 Reset Timeout: 7500 ms 00:20:05.487 Doorbell Stride: 4 bytes 00:20:05.487 NVM Subsystem Reset: Not Supported 00:20:05.487 Command Sets Supported 00:20:05.487 NVM Command Set: Supported 00:20:05.487 Boot Partition: Not Supported 00:20:05.487 Memory Page Size Minimum: 4096 bytes 00:20:05.487 Memory Page Size Maximum: 4096 bytes 00:20:05.487 Persistent Memory Region: Not Supported 00:20:05.487 Optional Asynchronous Events Supported 00:20:05.487 Namespace Attribute Notices: Not Supported 00:20:05.487 Firmware Activation Notices: Not Supported 00:20:05.487 ANA Change Notices: Not Supported 00:20:05.487 PLE Aggregate Log Change Notices: Not Supported 00:20:05.487 LBA Status Info Alert Notices: Not Supported 00:20:05.487 EGE Aggregate Log Change Notices: Not Supported 00:20:05.487 Normal NVM Subsystem Shutdown event: Not Supported 00:20:05.487 Zone Descriptor Change Notices: Not Supported 00:20:05.487 Discovery Log Change Notices: Supported 00:20:05.487 Controller Attributes 00:20:05.487 128-bit Host Identifier: Not Supported 00:20:05.487 Non-Operational Permissive Mode: Not Supported 00:20:05.487 NVM Sets: Not Supported 00:20:05.487 Read Recovery Levels: Not Supported 00:20:05.487 Endurance Groups: Not Supported 00:20:05.487 Predictable Latency Mode: Not Supported 00:20:05.487 Traffic Based Keep ALive: Not Supported 00:20:05.487 Namespace Granularity: Not Supported 00:20:05.487 SQ Associations: Not Supported 00:20:05.487 UUID List: Not Supported 00:20:05.487 Multi-Domain Subsystem: Not Supported 00:20:05.487 Fixed Capacity Management: Not Supported 00:20:05.487 Variable Capacity Management: Not Supported 00:20:05.487 Delete Endurance Group: Not Supported 00:20:05.487 Delete NVM Set: Not Supported 00:20:05.487 Extended LBA Formats Supported: Not Supported 00:20:05.487 Flexible Data Placement Supported: Not Supported 00:20:05.487 00:20:05.487 Controller Memory Buffer Support 00:20:05.487 ================================ 00:20:05.487 Supported: No 00:20:05.487 00:20:05.487 Persistent Memory Region Support 00:20:05.487 ================================ 00:20:05.488 Supported: No 00:20:05.488 00:20:05.488 Admin Command Set Attributes 00:20:05.488 ============================ 00:20:05.488 Security Send/Receive: Not Supported 00:20:05.488 Format NVM: Not Supported 00:20:05.488 Firmware Activate/Download: Not Supported 00:20:05.488 Namespace Management: Not Supported 00:20:05.488 Device Self-Test: Not Supported 00:20:05.488 Directives: Not Supported 00:20:05.488 NVMe-MI: Not Supported 00:20:05.488 Virtualization Management: Not Supported 00:20:05.488 Doorbell Buffer Config: Not Supported 00:20:05.488 Get LBA Status Capability: Not Supported 00:20:05.488 Command & Feature Lockdown Capability: Not Supported 00:20:05.488 Abort Command Limit: 1 00:20:05.488 Async Event Request Limit: 1 00:20:05.488 Number of Firmware Slots: N/A 00:20:05.488 Firmware Slot 1 Read-Only: N/A 00:20:05.488 Firmware Activation Without Reset: N/A 00:20:05.488 Multiple Update Detection Support: N/A 
00:20:05.488 Firmware Update Granularity: No Information Provided 00:20:05.488 Per-Namespace SMART Log: No 00:20:05.488 Asymmetric Namespace Access Log Page: Not Supported 00:20:05.488 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:05.488 Command Effects Log Page: Not Supported 00:20:05.488 Get Log Page Extended Data: Supported 00:20:05.488 Telemetry Log Pages: Not Supported 00:20:05.488 Persistent Event Log Pages: Not Supported 00:20:05.488 Supported Log Pages Log Page: May Support 00:20:05.488 Commands Supported & Effects Log Page: Not Supported 00:20:05.488 Feature Identifiers & Effects Log Page:May Support 00:20:05.488 NVMe-MI Commands & Effects Log Page: May Support 00:20:05.488 Data Area 4 for Telemetry Log: Not Supported 00:20:05.488 Error Log Page Entries Supported: 1 00:20:05.488 Keep Alive: Not Supported 00:20:05.488 00:20:05.488 NVM Command Set Attributes 00:20:05.488 ========================== 00:20:05.488 Submission Queue Entry Size 00:20:05.488 Max: 1 00:20:05.488 Min: 1 00:20:05.488 Completion Queue Entry Size 00:20:05.488 Max: 1 00:20:05.488 Min: 1 00:20:05.488 Number of Namespaces: 0 00:20:05.488 Compare Command: Not Supported 00:20:05.488 Write Uncorrectable Command: Not Supported 00:20:05.488 Dataset Management Command: Not Supported 00:20:05.488 Write Zeroes Command: Not Supported 00:20:05.488 Set Features Save Field: Not Supported 00:20:05.488 Reservations: Not Supported 00:20:05.488 Timestamp: Not Supported 00:20:05.488 Copy: Not Supported 00:20:05.488 Volatile Write Cache: Not Present 00:20:05.488 Atomic Write Unit (Normal): 1 00:20:05.488 Atomic Write Unit (PFail): 1 00:20:05.488 Atomic Compare & Write Unit: 1 00:20:05.488 Fused Compare & Write: Not Supported 00:20:05.488 Scatter-Gather List 00:20:05.488 SGL Command Set: Supported 00:20:05.488 SGL Keyed: Not Supported 00:20:05.488 SGL Bit Bucket Descriptor: Not Supported 00:20:05.488 SGL Metadata Pointer: Not Supported 00:20:05.488 Oversized SGL: Not Supported 00:20:05.488 SGL Metadata Address: Not Supported 00:20:05.488 SGL Offset: Supported 00:20:05.488 Transport SGL Data Block: Not Supported 00:20:05.488 Replay Protected Memory Block: Not Supported 00:20:05.488 00:20:05.488 Firmware Slot Information 00:20:05.488 ========================= 00:20:05.488 Active slot: 0 00:20:05.488 00:20:05.488 00:20:05.488 Error Log 00:20:05.488 ========= 00:20:05.488 00:20:05.488 Active Namespaces 00:20:05.488 ================= 00:20:05.488 Discovery Log Page 00:20:05.488 ================== 00:20:05.488 Generation Counter: 2 00:20:05.488 Number of Records: 2 00:20:05.488 Record Format: 0 00:20:05.488 00:20:05.488 Discovery Log Entry 0 00:20:05.488 ---------------------- 00:20:05.488 Transport Type: 3 (TCP) 00:20:05.488 Address Family: 1 (IPv4) 00:20:05.488 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:05.488 Entry Flags: 00:20:05.488 Duplicate Returned Information: 0 00:20:05.488 Explicit Persistent Connection Support for Discovery: 0 00:20:05.488 Transport Requirements: 00:20:05.488 Secure Channel: Not Specified 00:20:05.488 Port ID: 1 (0x0001) 00:20:05.488 Controller ID: 65535 (0xffff) 00:20:05.488 Admin Max SQ Size: 32 00:20:05.488 Transport Service Identifier: 4420 00:20:05.488 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:05.488 Transport Address: 10.0.0.1 00:20:05.488 Discovery Log Entry 1 00:20:05.488 ---------------------- 00:20:05.488 Transport Type: 3 (TCP) 00:20:05.488 Address Family: 1 (IPv4) 00:20:05.488 Subsystem Type: 2 (NVM Subsystem) 00:20:05.488 Entry Flags: 
00:20:05.488 Duplicate Returned Information: 0 00:20:05.488 Explicit Persistent Connection Support for Discovery: 0 00:20:05.488 Transport Requirements: 00:20:05.488 Secure Channel: Not Specified 00:20:05.488 Port ID: 1 (0x0001) 00:20:05.488 Controller ID: 65535 (0xffff) 00:20:05.488 Admin Max SQ Size: 32 00:20:05.488 Transport Service Identifier: 4420 00:20:05.488 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:20:05.488 Transport Address: 10.0.0.1 00:20:05.488 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:05.488 EAL: No free 2048 kB hugepages reported on node 1 00:20:05.488 get_feature(0x01) failed 00:20:05.488 get_feature(0x02) failed 00:20:05.488 get_feature(0x04) failed 00:20:05.488 ===================================================== 00:20:05.488 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:05.488 ===================================================== 00:20:05.488 Controller Capabilities/Features 00:20:05.488 ================================ 00:20:05.488 Vendor ID: 0000 00:20:05.488 Subsystem Vendor ID: 0000 00:20:05.488 Serial Number: e64dfcb6cffbedaeee0a 00:20:05.488 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:20:05.488 Firmware Version: 6.7.0-68 00:20:05.488 Recommended Arb Burst: 6 00:20:05.488 IEEE OUI Identifier: 00 00 00 00:20:05.488 Multi-path I/O 00:20:05.488 May have multiple subsystem ports: Yes 00:20:05.488 May have multiple controllers: Yes 00:20:05.488 Associated with SR-IOV VF: No 00:20:05.488 Max Data Transfer Size: Unlimited 00:20:05.488 Max Number of Namespaces: 1024 00:20:05.488 Max Number of I/O Queues: 128 00:20:05.488 NVMe Specification Version (VS): 1.3 00:20:05.488 NVMe Specification Version (Identify): 1.3 00:20:05.488 Maximum Queue Entries: 1024 00:20:05.488 Contiguous Queues Required: No 00:20:05.488 Arbitration Mechanisms Supported 00:20:05.488 Weighted Round Robin: Not Supported 00:20:05.488 Vendor Specific: Not Supported 00:20:05.488 Reset Timeout: 7500 ms 00:20:05.488 Doorbell Stride: 4 bytes 00:20:05.488 NVM Subsystem Reset: Not Supported 00:20:05.488 Command Sets Supported 00:20:05.488 NVM Command Set: Supported 00:20:05.488 Boot Partition: Not Supported 00:20:05.488 Memory Page Size Minimum: 4096 bytes 00:20:05.488 Memory Page Size Maximum: 4096 bytes 00:20:05.488 Persistent Memory Region: Not Supported 00:20:05.488 Optional Asynchronous Events Supported 00:20:05.488 Namespace Attribute Notices: Supported 00:20:05.488 Firmware Activation Notices: Not Supported 00:20:05.489 ANA Change Notices: Supported 00:20:05.489 PLE Aggregate Log Change Notices: Not Supported 00:20:05.489 LBA Status Info Alert Notices: Not Supported 00:20:05.489 EGE Aggregate Log Change Notices: Not Supported 00:20:05.489 Normal NVM Subsystem Shutdown event: Not Supported 00:20:05.489 Zone Descriptor Change Notices: Not Supported 00:20:05.489 Discovery Log Change Notices: Not Supported 00:20:05.489 Controller Attributes 00:20:05.489 128-bit Host Identifier: Supported 00:20:05.489 Non-Operational Permissive Mode: Not Supported 00:20:05.489 NVM Sets: Not Supported 00:20:05.489 Read Recovery Levels: Not Supported 00:20:05.489 Endurance Groups: Not Supported 00:20:05.489 Predictable Latency Mode: Not Supported 00:20:05.489 Traffic Based Keep ALive: Supported 00:20:05.489 Namespace Granularity: Not Supported 
00:20:05.489 SQ Associations: Not Supported 00:20:05.489 UUID List: Not Supported 00:20:05.489 Multi-Domain Subsystem: Not Supported 00:20:05.489 Fixed Capacity Management: Not Supported 00:20:05.489 Variable Capacity Management: Not Supported 00:20:05.489 Delete Endurance Group: Not Supported 00:20:05.489 Delete NVM Set: Not Supported 00:20:05.489 Extended LBA Formats Supported: Not Supported 00:20:05.489 Flexible Data Placement Supported: Not Supported 00:20:05.489 00:20:05.489 Controller Memory Buffer Support 00:20:05.489 ================================ 00:20:05.489 Supported: No 00:20:05.489 00:20:05.489 Persistent Memory Region Support 00:20:05.489 ================================ 00:20:05.489 Supported: No 00:20:05.489 00:20:05.489 Admin Command Set Attributes 00:20:05.489 ============================ 00:20:05.489 Security Send/Receive: Not Supported 00:20:05.489 Format NVM: Not Supported 00:20:05.489 Firmware Activate/Download: Not Supported 00:20:05.489 Namespace Management: Not Supported 00:20:05.489 Device Self-Test: Not Supported 00:20:05.489 Directives: Not Supported 00:20:05.489 NVMe-MI: Not Supported 00:20:05.489 Virtualization Management: Not Supported 00:20:05.489 Doorbell Buffer Config: Not Supported 00:20:05.489 Get LBA Status Capability: Not Supported 00:20:05.489 Command & Feature Lockdown Capability: Not Supported 00:20:05.489 Abort Command Limit: 4 00:20:05.489 Async Event Request Limit: 4 00:20:05.489 Number of Firmware Slots: N/A 00:20:05.489 Firmware Slot 1 Read-Only: N/A 00:20:05.489 Firmware Activation Without Reset: N/A 00:20:05.489 Multiple Update Detection Support: N/A 00:20:05.489 Firmware Update Granularity: No Information Provided 00:20:05.489 Per-Namespace SMART Log: Yes 00:20:05.489 Asymmetric Namespace Access Log Page: Supported 00:20:05.489 ANA Transition Time : 10 sec 00:20:05.489 00:20:05.489 Asymmetric Namespace Access Capabilities 00:20:05.489 ANA Optimized State : Supported 00:20:05.489 ANA Non-Optimized State : Supported 00:20:05.489 ANA Inaccessible State : Supported 00:20:05.489 ANA Persistent Loss State : Supported 00:20:05.489 ANA Change State : Supported 00:20:05.489 ANAGRPID is not changed : No 00:20:05.489 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:20:05.489 00:20:05.489 ANA Group Identifier Maximum : 128 00:20:05.489 Number of ANA Group Identifiers : 128 00:20:05.489 Max Number of Allowed Namespaces : 1024 00:20:05.489 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:20:05.489 Command Effects Log Page: Supported 00:20:05.489 Get Log Page Extended Data: Supported 00:20:05.489 Telemetry Log Pages: Not Supported 00:20:05.489 Persistent Event Log Pages: Not Supported 00:20:05.489 Supported Log Pages Log Page: May Support 00:20:05.489 Commands Supported & Effects Log Page: Not Supported 00:20:05.489 Feature Identifiers & Effects Log Page:May Support 00:20:05.489 NVMe-MI Commands & Effects Log Page: May Support 00:20:05.489 Data Area 4 for Telemetry Log: Not Supported 00:20:05.489 Error Log Page Entries Supported: 128 00:20:05.489 Keep Alive: Supported 00:20:05.489 Keep Alive Granularity: 1000 ms 00:20:05.489 00:20:05.489 NVM Command Set Attributes 00:20:05.489 ========================== 00:20:05.489 Submission Queue Entry Size 00:20:05.489 Max: 64 00:20:05.489 Min: 64 00:20:05.489 Completion Queue Entry Size 00:20:05.489 Max: 16 00:20:05.489 Min: 16 00:20:05.489 Number of Namespaces: 1024 00:20:05.489 Compare Command: Not Supported 00:20:05.489 Write Uncorrectable Command: Not Supported 00:20:05.489 Dataset Management Command: Supported 
00:20:05.489 Write Zeroes Command: Supported 00:20:05.489 Set Features Save Field: Not Supported 00:20:05.489 Reservations: Not Supported 00:20:05.489 Timestamp: Not Supported 00:20:05.489 Copy: Not Supported 00:20:05.489 Volatile Write Cache: Present 00:20:05.489 Atomic Write Unit (Normal): 1 00:20:05.489 Atomic Write Unit (PFail): 1 00:20:05.489 Atomic Compare & Write Unit: 1 00:20:05.489 Fused Compare & Write: Not Supported 00:20:05.489 Scatter-Gather List 00:20:05.489 SGL Command Set: Supported 00:20:05.489 SGL Keyed: Not Supported 00:20:05.489 SGL Bit Bucket Descriptor: Not Supported 00:20:05.489 SGL Metadata Pointer: Not Supported 00:20:05.489 Oversized SGL: Not Supported 00:20:05.489 SGL Metadata Address: Not Supported 00:20:05.489 SGL Offset: Supported 00:20:05.489 Transport SGL Data Block: Not Supported 00:20:05.489 Replay Protected Memory Block: Not Supported 00:20:05.489 00:20:05.489 Firmware Slot Information 00:20:05.489 ========================= 00:20:05.489 Active slot: 0 00:20:05.489 00:20:05.489 Asymmetric Namespace Access 00:20:05.489 =========================== 00:20:05.489 Change Count : 0 00:20:05.489 Number of ANA Group Descriptors : 1 00:20:05.489 ANA Group Descriptor : 0 00:20:05.489 ANA Group ID : 1 00:20:05.489 Number of NSID Values : 1 00:20:05.489 Change Count : 0 00:20:05.489 ANA State : 1 00:20:05.489 Namespace Identifier : 1 00:20:05.489 00:20:05.489 Commands Supported and Effects 00:20:05.489 ============================== 00:20:05.489 Admin Commands 00:20:05.489 -------------- 00:20:05.489 Get Log Page (02h): Supported 00:20:05.489 Identify (06h): Supported 00:20:05.489 Abort (08h): Supported 00:20:05.489 Set Features (09h): Supported 00:20:05.489 Get Features (0Ah): Supported 00:20:05.489 Asynchronous Event Request (0Ch): Supported 00:20:05.489 Keep Alive (18h): Supported 00:20:05.489 I/O Commands 00:20:05.489 ------------ 00:20:05.489 Flush (00h): Supported 00:20:05.489 Write (01h): Supported LBA-Change 00:20:05.489 Read (02h): Supported 00:20:05.489 Write Zeroes (08h): Supported LBA-Change 00:20:05.489 Dataset Management (09h): Supported 00:20:05.489 00:20:05.489 Error Log 00:20:05.489 ========= 00:20:05.489 Entry: 0 00:20:05.489 Error Count: 0x3 00:20:05.489 Submission Queue Id: 0x0 00:20:05.489 Command Id: 0x5 00:20:05.489 Phase Bit: 0 00:20:05.489 Status Code: 0x2 00:20:05.489 Status Code Type: 0x0 00:20:05.489 Do Not Retry: 1 00:20:05.489 Error Location: 0x28 00:20:05.489 LBA: 0x0 00:20:05.489 Namespace: 0x0 00:20:05.489 Vendor Log Page: 0x0 00:20:05.489 ----------- 00:20:05.489 Entry: 1 00:20:05.489 Error Count: 0x2 00:20:05.489 Submission Queue Id: 0x0 00:20:05.489 Command Id: 0x5 00:20:05.489 Phase Bit: 0 00:20:05.489 Status Code: 0x2 00:20:05.489 Status Code Type: 0x0 00:20:05.489 Do Not Retry: 1 00:20:05.489 Error Location: 0x28 00:20:05.489 LBA: 0x0 00:20:05.489 Namespace: 0x0 00:20:05.489 Vendor Log Page: 0x0 00:20:05.489 ----------- 00:20:05.489 Entry: 2 00:20:05.489 Error Count: 0x1 00:20:05.489 Submission Queue Id: 0x0 00:20:05.489 Command Id: 0x4 00:20:05.489 Phase Bit: 0 00:20:05.489 Status Code: 0x2 00:20:05.489 Status Code Type: 0x0 00:20:05.489 Do Not Retry: 1 00:20:05.489 Error Location: 0x28 00:20:05.489 LBA: 0x0 00:20:05.489 Namespace: 0x0 00:20:05.489 Vendor Log Page: 0x0 00:20:05.489 00:20:05.489 Number of Queues 00:20:05.489 ================ 00:20:05.489 Number of I/O Submission Queues: 128 00:20:05.489 Number of I/O Completion Queues: 128 00:20:05.489 00:20:05.490 ZNS Specific Controller Data 00:20:05.490 
============================ 00:20:05.490 Zone Append Size Limit: 0 00:20:05.490 00:20:05.490 00:20:05.490 Active Namespaces 00:20:05.490 ================= 00:20:05.490 get_feature(0x05) failed 00:20:05.490 Namespace ID:1 00:20:05.490 Command Set Identifier: NVM (00h) 00:20:05.490 Deallocate: Supported 00:20:05.490 Deallocated/Unwritten Error: Not Supported 00:20:05.490 Deallocated Read Value: Unknown 00:20:05.490 Deallocate in Write Zeroes: Not Supported 00:20:05.490 Deallocated Guard Field: 0xFFFF 00:20:05.490 Flush: Supported 00:20:05.490 Reservation: Not Supported 00:20:05.490 Namespace Sharing Capabilities: Multiple Controllers 00:20:05.490 Size (in LBAs): 1953525168 (931GiB) 00:20:05.490 Capacity (in LBAs): 1953525168 (931GiB) 00:20:05.490 Utilization (in LBAs): 1953525168 (931GiB) 00:20:05.490 UUID: a2c9d852-f3fb-4142-a2be-ed667749b611 00:20:05.490 Thin Provisioning: Not Supported 00:20:05.490 Per-NS Atomic Units: Yes 00:20:05.490 Atomic Boundary Size (Normal): 0 00:20:05.490 Atomic Boundary Size (PFail): 0 00:20:05.490 Atomic Boundary Offset: 0 00:20:05.490 NGUID/EUI64 Never Reused: No 00:20:05.490 ANA group ID: 1 00:20:05.490 Namespace Write Protected: No 00:20:05.490 Number of LBA Formats: 1 00:20:05.490 Current LBA Format: LBA Format #00 00:20:05.490 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:05.490 00:20:05.490 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:20:05.490 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:05.490 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:20:05.490 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:05.490 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:20:05.490 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:05.490 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:05.490 rmmod nvme_tcp 00:20:05.490 rmmod nvme_fabrics 00:20:05.490 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:05.490 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:20:05.490 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:20:05.490 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:20:05.490 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:05.490 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:05.490 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:05.490 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:05.490 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:05.490 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.490 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:05.490 11:00:21 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.020 11:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:08.020 
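Before the clean_kernel_target trace below tears everything down, it helps to restate what configure_kernel_target assembled above: xtrace shows the mkdir and echo commands but drops their redirection targets, so the configfs files being written are implicit. A minimal equivalent sketch, assuming the standard kernel nvmet attribute names (attr_model, attr_allow_any_host, device_path, enable, addr_*) rather than the verbatim helper:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # reported as Model Number in the identify output above
echo 1 > "$subsys/attr_allow_any_host"                         # no per-host allow-list
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"         # back namespace 1 with the local NVMe disk
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"                   # listen on 10.0.0.1 (cvl_0_1, default netns)
echo tcp > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"                   # expose the subsystem on port 1

The cleanup that follows is the mirror image: rm -f the port symlink, rmdir the namespace, port and subsystem directories, then modprobe -r the nvmet modules.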
11:00:23 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:20:08.020 11:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:08.020 11:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:20:08.020 11:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:08.020 11:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:08.020 11:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:08.020 11:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:08.020 11:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:20:08.020 11:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:20:08.020 11:00:23 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:20:08.955 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:20:08.955 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:20:08.955 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:20:08.955 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:20:08.955 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:20:08.955 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:20:08.955 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:20:08.955 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:20:08.955 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:20:08.955 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:20:08.955 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:20:08.955 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:20:08.955 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:20:08.955 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:20:08.955 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:20:08.955 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:20:09.892 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:20:10.150 00:20:10.150 real 0m9.959s 00:20:10.150 user 0m2.241s 00:20:10.150 sys 0m3.873s 00:20:10.150 11:00:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # xtrace_disable 00:20:10.150 11:00:26 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.150 ************************************ 00:20:10.150 END TEST nvmf_identify_kernel_target 00:20:10.150 ************************************ 00:20:10.150 11:00:26 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_auth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:10.150 11:00:26 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:20:10.150 11:00:26 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable 00:20:10.150 11:00:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:10.150 ************************************ 00:20:10.150 START TEST nvmf_auth 00:20:10.150 ************************************ 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:10.150 * 
Looking for test storage... 00:20:10.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@7 -- # uname -s 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- paths/export.sh@5 -- # export PATH 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@47 -- # : 0 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- host/auth.sh@21 -- # keys=() 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- host/auth.sh@21 -- # ckeys=() 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- host/auth.sh@81 -- # nvmftestinit 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- nvmf/common.sh@285 -- # xtrace_disable 00:20:10.150 11:00:26 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:12.682 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:12.682 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@291 -- # pci_devs=() 00:20:12.682 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:12.682 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:12.682 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:12.682 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:12.682 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:12.682 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@295 -- # net_devs=() 00:20:12.682 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:12.682 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@296 -- # e810=() 00:20:12.682 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@296 -- # local -ga e810 00:20:12.682 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@297 -- # x722=() 00:20:12.682 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@297 -- # local -ga x722 00:20:12.682 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@298 -- # mlx=() 00:20:12.682 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@298 -- # local -ga mlx 00:20:12.682 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:12.682 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:12.682 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:12.683 11:00:28 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:12.683 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:12.683 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:12.683 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:12.683 Found net devices under 0000:0a:00.1: cvl_0_1 
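The device-detection loop above resolves each supported PCI function to its kernel net interface purely through sysfs: pci_net_devs globs /sys/bus/pci/devices/$pci/net/ and the ${pci_net_devs[@]##*/} expansion keeps only the interface name. A standalone sketch of the same lookup (the two E810 addresses are taken from the trace; any NIC address works):

for pci in 0000:0a:00.0 0000:0a:00.1; do
    for netdev in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$netdev" ] || continue     # no net driver bound to this function
        echo "$pci -> ${netdev##*/}"     # e.g. 0000:0a:00.0 -> cvl_0_0
    done
done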
00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@414 -- # is_hw=yes 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:12.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:12.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:20:12.683 00:20:12.683 --- 10.0.0.2 ping statistics --- 00:20:12.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.683 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:12.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:12.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:20:12.683 00:20:12.683 --- 10.0.0.1 ping statistics --- 00:20:12.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:12.683 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@422 -- # return 0 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- host/auth.sh@82 -- # nvmfappstart -L nvme_auth 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@720 -- # xtrace_disable 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@481 -- # nvmfpid=2870712 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- nvmf/common.sh@482 -- # waitforlisten 2870712 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@827 -- # '[' -z 2870712 ']' 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:12.683 11:00:28 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@860 -- # return 0 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@83 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # gen_key null 32 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=44adef24c08c8022306e1b7991a9b834 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.4B7 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 44adef24c08c8022306e1b7991a9b834 0 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 44adef24c08c8022306e1b7991a9b834 0 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=44adef24c08c8022306e1b7991a9b834 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.4B7 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.4B7 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # keys[0]=/tmp/spdk.key-null.4B7 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # gen_key sha512 64 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha512 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=64 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # 
key=f671617b651b0d46c0f8f52ee905dbb64d762f21f5863ff8df6d3079c2da1062 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.sze 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key f671617b651b0d46c0f8f52ee905dbb64d762f21f5863ff8df6d3079c2da1062 3 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 f671617b651b0d46c0f8f52ee905dbb64d762f21f5863ff8df6d3079c2da1062 3 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=f671617b651b0d46c0f8f52ee905dbb64d762f21f5863ff8df6d3079c2da1062 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=3 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.sze 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.sze 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@86 -- # ckeys[0]=/tmp/spdk.key-sha512.sze 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # gen_key null 48 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=3feb808ca6779e464530493298ea624a2af3f0ea48ea0122 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.f1E 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 3feb808ca6779e464530493298ea624a2af3f0ea48ea0122 0 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 3feb808ca6779e464530493298ea624a2af3f0ea48ea0122 0 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=3feb808ca6779e464530493298ea624a2af3f0ea48ea0122 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.f1E 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.f1E 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # keys[1]=/tmp/spdk.key-null.f1E 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # gen_key sha384 48 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' 
['sha384']='2' ['sha512']='3') 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha384 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=f35141bb1fc4d66b3fb66c5fe3891c5e48849ef19a2eec01 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.uRQ 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key f35141bb1fc4d66b3fb66c5fe3891c5e48849ef19a2eec01 2 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 f35141bb1fc4d66b3fb66c5fe3891c5e48849ef19a2eec01 2 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=f35141bb1fc4d66b3fb66c5fe3891c5e48849ef19a2eec01 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=2 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.uRQ 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.uRQ 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@87 -- # ckeys[1]=/tmp/spdk.key-sha384.uRQ 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # gen_key sha256 32 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha256 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=6088cb46aa1b7cb405c4f194a1da986b 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.vSW 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 6088cb46aa1b7cb405c4f194a1da986b 1 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 6088cb46aa1b7cb405c4f194a1da986b 1 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=6088cb46aa1b7cb405c4f194a1da986b 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=1 00:20:13.252 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:20:13.510 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.vSW 00:20:13.510 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.vSW 00:20:13.510 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # 
keys[2]=/tmp/spdk.key-sha256.vSW 00:20:13.510 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # gen_key sha256 32 00:20:13.510 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:20:13.510 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha256 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=9a3f49a78afa3ea98d2730d4c609f892 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha256.XXX 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha256.yLo 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 9a3f49a78afa3ea98d2730d4c609f892 1 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 9a3f49a78afa3ea98d2730d4c609f892 1 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=9a3f49a78afa3ea98d2730d4c609f892 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=1 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha256.yLo 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha256.yLo 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@88 -- # ckeys[2]=/tmp/spdk.key-sha256.yLo 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # gen_key sha384 48 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha384 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=48 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=23fc7b073aa27498f13c0e4419d87154cf3344272b1aa06d 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha384.XXX 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha384.TQm 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 23fc7b073aa27498f13c0e4419d87154cf3344272b1aa06d 2 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 23fc7b073aa27498f13c0e4419d87154cf3344272b1aa06d 2 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=23fc7b073aa27498f13c0e4419d87154cf3344272b1aa06d 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=2 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth 
-- nvmf/common.sh@705 -- # python - 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha384.TQm 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha384.TQm 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # keys[3]=/tmp/spdk.key-sha384.TQm 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # gen_key null 32 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=null 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=32 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=4696644afebcd5870dc81a74e8850a67 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-null.XXX 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-null.t7Y 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key 4696644afebcd5870dc81a74e8850a67 0 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 4696644afebcd5870dc81a74e8850a67 0 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix key digest 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=4696644afebcd5870dc81a74e8850a67 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=0 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-null.t7Y 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-null.t7Y 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@89 -- # ckeys[3]=/tmp/spdk.key-null.t7Y 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # gen_key sha512 64 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@55 -- # local digest len file key 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@56 -- # local -A digests 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # digest=sha512 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@58 -- # len=64 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@59 -- # key=e1e0144abcfb9423632ac35533ee1b881db9c001710319b793b9f34a52671f2b 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # mktemp -t spdk.key-sha512.XXX 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@60 -- # file=/tmp/spdk.key-sha512.17w 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@61 -- # format_dhchap_key e1e0144abcfb9423632ac35533ee1b881db9c001710319b793b9f34a52671f2b 3 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@719 -- # format_key DHHC-1 e1e0144abcfb9423632ac35533ee1b881db9c001710319b793b9f34a52671f2b 3 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@702 -- # local prefix 
key digest 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # key=e1e0144abcfb9423632ac35533ee1b881db9c001710319b793b9f34a52671f2b 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@704 -- # digest=3 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- nvmf/common.sh@705 -- # python - 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@62 -- # chmod 0600 /tmp/spdk.key-sha512.17w 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@64 -- # echo /tmp/spdk.key-sha512.17w 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # keys[4]=/tmp/spdk.key-sha512.17w 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@90 -- # ckeys[4]= 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@92 -- # waitforlisten 2870712 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@827 -- # '[' -z 2870712 ']' 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@832 -- # local max_retries=100 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@836 -- # xtrace_disable 00:20:13.511 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@860 -- # return 0 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.4B7 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha512.sze ]] 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.sze 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.f1E 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha384.uRQ ]] 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uRQ 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.vSW 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-sha256.yLo ]] 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.yLo 00:20:13.770 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.771 11:00:29 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:13.771 11:00:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.771 11:00:30 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:20:13.771 11:00:30 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.TQm 00:20:13.771 11:00:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.771 11:00:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:14.039 11:00:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n /tmp/spdk.key-null.t7Y ]] 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.t7Y 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- host/auth.sh@93 -- # for i in "${!keys[@]}" 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- host/auth.sh@94 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.17w 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- host/auth.sh@95 -- # [[ -n '' ]] 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- host/auth.sh@98 -- # nvmet_auth_init 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- host/auth.sh@35 -- # get_main_ns_ip 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth 
-- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@639 -- # local block nvme 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@642 -- # modprobe nvmet 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:14.040 11:00:30 nvmf_tcp.nvmf_auth -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:20:15.422 Waiting for block devices as requested 00:20:15.422 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:20:15.422 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:20:15.422 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:20:15.422 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:20:15.422 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:20:15.680 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:20:15.680 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:20:15.680 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:20:15.680 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:20:15.939 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:20:15.939 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:20:15.939 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:20:15.939 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:20:16.196 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:20:16.196 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:20:16.196 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:20:16.196 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:20:16.784 No valid GPT data, bailing 00:20:16.784 
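The gen_key/format_dhchap_key traces above reduce to a short sketch: the secret is a hex string read from /dev/urandom, then wrapped into the DHHC-1:<digest>:<base64>: form. xtrace does not show the body passed to "python -", so the heredoc below is a hypothetical stand-in; the CRC32 trailer is an assumption based on the NVMe DH-HMAC-CHAP secret format, with which the logged DHHC-1 strings are consistent.

# minimal sketch of "gen_key null 32"; the heredoc stands in for the
# un-traced "python -" body (CRC32 trailer assumed, see above)
key=$(xxd -p -c0 -l 16 /dev/urandom)   # 32 hex chars for "null 32"
python3 - "$key" 0 <<'PY'
import base64, sys, zlib
secret = sys.argv[1].encode()   # the ASCII hex string itself is the secret
digest = int(sys.argv[2])       # 0=null, 1=sha256, 2=sha384, 3=sha512
crc = zlib.crc32(secret).to_bytes(4, 'little')   # assumed 4-byte LE trailer
print('DHHC-1:{:02x}:{}:'.format(digest, base64.b64encode(secret + crc).decode()))
PY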
11:00:32 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- scripts/common.sh@391 -- # pt= 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- scripts/common.sh@392 -- # return 1 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@667 -- # echo 1 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@669 -- # echo 1 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@672 -- # echo tcp 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@673 -- # echo 4420 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@674 -- # echo ipv4 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:20:16.784 00:20:16.784 Discovery Log Number of Records 2, Generation counter 2 00:20:16.784 =====Discovery Log Entry 0====== 00:20:16.784 trtype: tcp 00:20:16.784 adrfam: ipv4 00:20:16.784 subtype: current discovery subsystem 00:20:16.784 treq: not specified, sq flow control disable supported 00:20:16.784 portid: 1 00:20:16.784 trsvcid: 4420 00:20:16.784 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:16.784 traddr: 10.0.0.1 00:20:16.784 eflags: none 00:20:16.784 sectype: none 00:20:16.784 =====Discovery Log Entry 1====== 00:20:16.784 trtype: tcp 00:20:16.784 adrfam: ipv4 00:20:16.784 subtype: nvme subsystem 00:20:16.784 treq: not specified, sq flow control disable supported 00:20:16.784 portid: 1 00:20:16.784 trsvcid: 4420 00:20:16.784 subnqn: nqn.2024-02.io.spdk:cnode0 00:20:16.784 traddr: 10.0.0.1 00:20:16.784 eflags: none 00:20:16.784 sectype: none 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@37 -- # echo 0 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@101 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: ]] 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # IFS=, 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@107 -- # printf %s sha256,sha384,sha512 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # IFS=, 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@107 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@106 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256,sha384,sha512 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:16.784 11:00:32 
nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:16.784 nvme0n1 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:16.784 11:00:32 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: ]] 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 0 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 
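Condensed, the configure_kernel_target sequence traced a few entries back (nvmf/common.sh@658-677 plus the host/auth.sh@36-38 allowed_hosts wiring) amounts to the configfs writes below. xtrace does not print redirection targets, so the attribute names are inferred from the standard nvmet configfs layout; this is a sketch under that assumption, to be run as root with the nvmet/nvmet_tcp modules loaded.

nvmet=/sys/kernel/config/nvmet
sub=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=$nvmet/ports/1
host=$nvmet/hosts/nqn.2024-02.io.spdk:host0

mkdir "$sub" "$sub/namespaces/1" "$port"   # configfs auto-creates namespaces/ and allowed_hosts/
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$sub/attr_model"   # inferred target of the traced echo
echo 1 > "$sub/attr_allow_any_host"
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
echo 1 > "$sub/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"

# auth.sh then restricts the subsystem to the test host NQN
mkdir "$host"
echo 0 > "$sub/attr_allow_any_host"
ln -s "$host" "$sub/allowed_hosts/"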
00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:17.049 nvme0n1 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:17.049 11:00:33 
nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: ]] 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 1 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.049 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:17.310 nvme0n1 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.310 11:00:33 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: ]] 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 2 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.310 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:17.571 nvme0n1 00:20:17.571 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.571 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.571 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:17.571 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.571 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:17.571 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.571 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.571 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.571 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.571 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:17.571 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.571 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:17.571 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: ]] 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 3 
00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:17.572 nvme0n1 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.572 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:17.830 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.830 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:17.830 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:20:17.830 11:00:33 nvmf_tcp.nvmf_auth 
-- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.830 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:17.830 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:17.830 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe2048 4 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:17.831 nvme0n1 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:17.831 11:00:33 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.831 11:00:33 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: ]] 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 0 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:17.831 11:00:34 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.831 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:18.089 nvme0n1 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: ]] 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 1 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.089 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:18.348 nvme0n1 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: ]] 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 2 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.348 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:18.606 nvme0n1 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: ]] 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 3 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
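Every attach above is validated the same way before teardown: list the controllers, extract the names with jq, compare against the expected nvme0, then detach so the next digest/group/key combination starts from a clean slate. A condensed sketch of that check, mirroring auth.sh@77-78 (again with rpc.py in place of rpc_cmd):

    # Post-attach verification and teardown (auth.sh@77-78).
    name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]                        # authentication must have produced the controller
    rpc.py bdev_nvme_detach_controller nvme0    # clean slate for the next combination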
xtrace_disable 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.607 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:18.864 nvme0n1 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe3072 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe3072 4 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.864 11:00:34 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:18.864 nvme0n1 00:20:18.864 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.864 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:18.864 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.864 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:18.864 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:18.865 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- 
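The DHHC-1 secrets cycling through this trace encode their transform in the second field: per the NVMe DH-HMAC-CHAP spec, :00: is an untransformed secret while :01:, :02:, :03: mark SHA-256/384/512-transformed keys, paired here with 32-, 48-, and 64-byte secrets respectively. Key id 4 above is the special case: its DHHC-1:03: key has no companion controller key, so that pass exercises unidirectional (host-only) authentication. Fresh secrets of a given flavor can be minted with nvme-cli rather than hard-coded; the flag spellings below are an assumption about nvme-cli and are not exercised by this log, so check nvme gen-dhchap-key --help before relying on them:

    # Assumed nvme-cli invocation for generating a DHHC-1 secret (flags unverified here).
    nvme gen-dhchap-key --hmac=1 --key-length=32 --nqn nqn.2024-02.io.spdk:host0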
common/autotest_common.sh@10 -- # set +x 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: ]] 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 0 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.122 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:19.381 nvme0n1 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: ]] 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 1 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@71 
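The get_main_ns_ip helper traced repeatedly above (nvmf/common.sh@728-742) picks the connect address by transport: NVMF_FIRST_TARGET_IP for RDMA, NVMF_INITIATOR_IP for TCP, resolving to 10.0.0.1 throughout this run. A reconstruction of its logic from those trace lines; the transport variable's real name is an assumption, since the trace only shows its value, "tcp":

    # Reconstructed from the nvmf/common.sh@728-742 xtrace lines.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1   # indirect expansion of the chosen variable
        echo "${!ip}"                 # 10.0.0.1 in this run
    }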
-- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.381 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:19.639 nvme0n1 00:20:19.639 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.639 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.639 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.639 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:19.639 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:19.639 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.639 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.639 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.639 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.639 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:19.639 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.639 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:19.639 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:19.639 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:19.639 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: ]] 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 2 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.640 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:19.898 nvme0n1 00:20:19.898 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.898 11:00:35 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:19.898 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.898 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:19.898 11:00:35 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:19.898 11:00:35 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: ]] 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 3 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.898 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:19.899 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:19.899 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:19.899 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:19.899 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:19.899 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:20:19.899 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:19.899 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:19.899 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:19.899 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:19.899 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:19.899 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:19.899 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.899 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:20.157 nvme0n1 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe4096 4 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:20.157 11:00:36 
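Key id 4 has just been loaded with an empty controller key (ckey= at auth.sh@46 above), and the array expansion at auth.sh@71 on the next line is what makes that degrade gracefully: bash's ${var:+word} expands to the option pair only when the controller key exists, so the final attach simply omits --dhchap-ctrlr-key. A tiny standalone demo of the idiom, with placeholder key material:

    # ${var:+word} expands to word only if var is set and non-empty.
    ckeys=([0]="DHHC-1:03:placeholder:" [4]="")
    keyid=0; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]}"   # -> --dhchap-ctrlr-key ckey0
    keyid=4; ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${ckey[@]}"   # -> (nothing: the option pair is dropped)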
nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.157 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:20.415 nvme0n1 00:20:20.415 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.415 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:20.415 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.415 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:20.415 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:20.415 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- 
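Stepping back, this whole passage is one digest's sweep of the test matrix: for each DH group (ffdhe3072 above, then ffdhe4096, now ffdhe6144, with ffdhe8192 still to come) the script re-keys the target and re-authenticates once per key id 0-4. The driving loop, as reconstructed from the auth.sh@114-117 trace lines; digest is sha256 in this portion, and any array contents beyond those visible here are assumptions:

    # Shape of the sweep in host/auth.sh (trace lines @114-117).
    for dhgroup in "${dhgroups[@]}"; do        # ... ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192
        for keyid in "${!keys[@]}"; do         # key ids 0..4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # re-key the kernel target
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach
        done
    done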
host/auth.sh@44 -- # keyid=0 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: ]] 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 0 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.674 11:00:36 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:20.932 nvme0n1 00:20:20.932 11:00:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.932 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:20:20.932 11:00:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.932 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:20.932 11:00:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: ]] 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 1 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:21.191 
11:00:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.191 11:00:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:21.757 nvme0n1 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: ]] 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:21.757 
11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 2 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.757 11:00:37 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:22.323 nvme0n1 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- 
host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: ]] 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 3 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:22.323 11:00:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:22.890 nvme0n1 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe6144 4 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 
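==== editor's sketch (not part of the captured trace) ========================
The trace above repeats one loop body per digest/dhgroup/keyid combination. Below is a minimal sketch of a single iteration, reconstructed from the echo and rpc_cmd lines in this log. The nvmet configfs path, the scripts/rpc.py entry point standing in for rpc_cmd, and the assumption that keyring entries key<id>/ckey<id> were registered earlier in the script are all illustrative; the DHHC-1 secrets are elided on purpose.

    # Combination under test (values taken from the trace above).
    digest=sha256 dhgroup=ffdhe6144 keyid=2
    # Target side: install the host's DH-HMAC-CHAP parameters in kernel nvmet.
    # The configfs location is an assumption; only the echoed values appear in the log.
    hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo "hmac(${digest})" > "${hostdir}/dhchap_hash"      # matches: echo 'hmac(sha256)'
    echo "${dhgroup}"      > "${hostdir}/dhchap_dhgroup"   # matches: echo ffdhe6144
    echo "DHHC-1:01:..."   > "${hostdir}/dhchap_key"       # host key (secret elided)
    echo "DHHC-1:01:..."   > "${hostdir}/dhchap_ctrl_key"  # bidirectional key, when ckey is set
    # Initiator side: restrict the host to this digest/dhgroup pair, connect,
    # verify the controller came up, then tear it down for the next iteration.
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests "${digest}" --dhchap-dhgroups "${dhgroup}"
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0

The "nvme0n1" markers in the trace are the namespace surfacing after each successful attach; authentication failed iterations would never reach the get_controllers check.
==============================================================================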
00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.890 11:00:38 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:23.455 nvme0n1 00:20:23.455 11:00:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.455 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:23.455 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:23.455 11:00:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.455 11:00:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:23.455 11:00:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.455 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.455 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:23.455 11:00:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.455 11:00:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:23.455 11:00:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.455 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:23.455 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:23.455 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:23.455 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:23.455 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:23.455 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:23.455 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:23.455 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:23.455 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:23.455 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: ]] 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 0 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.456 11:00:39 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:24.387 nvme0n1 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.387 11:00:40 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: ]] 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 1 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.387 11:00:40 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:25.340 nvme0n1 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: ]] 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 2 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups 
ffdhe8192 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.340 11:00:41 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:26.272 nvme0n1 00:20:26.272 11:00:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.272 11:00:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:26.272 11:00:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.272 11:00:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:26.272 11:00:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:26.272 11:00:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.272 11:00:42 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:26.273 
11:00:42 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: ]] 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 3 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.273 11:00:42 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:27.204 nvme0n1 00:20:27.204 11:00:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.204 11:00:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:27.204 11:00:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.204 11:00:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:27.204 11:00:43 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:27.204 11:00:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.461 11:00:43 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha256 ffdhe8192 4 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha256 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:27.461 11:00:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:27.462 11:00:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:27.462 11:00:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:27.462 11:00:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:27.462 11:00:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:27.462 11:00:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:27.462 11:00:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:27.462 11:00:43 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:27.462 11:00:43 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:27.462 11:00:43 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:27.462 11:00:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.462 11:00:43 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:28.396 nvme0n1 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: ]] 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 0 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:28.396 
11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:28.396 nvme0n1 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:28.396 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=1 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: ]] 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 1 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:28.654 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:28.655 nvme0n1 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: ]] 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 2 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.655 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:28.912 nvme0n1 00:20:28.912 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.912 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:28.912 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.912 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:28.912 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:28.912 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.912 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.912 11:00:44 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:28.912 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.912 11:00:44 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: ]] 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate 
sha384 ffdhe2048 3 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:28.912 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:28.913 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:28.913 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:28.913 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:28.913 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:28.913 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:28.913 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.913 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.170 nvme0n1 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:29.170 11:00:45 
nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe2048 4 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.170 nvme0n1 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 
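==== editor's sketch (not part of the captured trace) ========================
At this point the trace has moved from sha384/ffdhe2048 to sha384/ffdhe3072, which makes the loop nesting visible in the for-lines at host/auth.sh@113-116. The reconstruction below is inferred from those traced lines, not copied from the script: this excerpt only shows the sha256 and sha384 digests, the ffdhe2048/3072/6144/8192 groups, and key indices 0 through 4, so the exact array contents (sha512, ffdhe4096, the keys array defined earlier in auth.sh) are assumptions.

    digests=(sha256 sha384 sha512)                               # sha512 assumed by extension
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192) # ffdhe4096 not in this excerpt
    for digest in "${digests[@]}"; do                            # host/auth.sh@113
        for dhgroup in "${dhgroups[@]}"; do                      # host/auth.sh@114
            for keyid in "${!keys[@]}"; do                       # host/auth.sh@115
                nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"  # host/auth.sh@116
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host/auth.sh@117
            done
        done
    done

Each inner iteration is one full target-key install, connect, verify, detach cycle as sketched earlier, so the wall-clock cost grows with the product of the three array lengths; the larger FFDHE groups (6144/8192) dominate the runtime visible in the timestamps.
==============================================================================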
00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: ]] 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 0 00:20:29.170 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:29.428 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:29.428 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:29.428 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:29.428 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:29.428 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:29.428 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.428 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.428 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.428 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:29.428 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:29.428 
11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:29.428 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.429 nvme0n1 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: ]] 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 1 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.429 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.687 nvme0n1 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: ]] 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 2 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.687 11:00:45 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.974 nvme0n1 00:20:29.974 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.974 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:29.974 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.974 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:29.974 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.974 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.974 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.974 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:29.974 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.974 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.974 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.974 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:29.974 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:29.974 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:29.974 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:29.974 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:29.974 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:29.974 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:29.974 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:29.974 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:29.974 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:29.974 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:29.974 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: ]] 00:20:29.974 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:29.974 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 3 00:20:29.974 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:29.975 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:29.975 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:29.975 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:29.975 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:29.975 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:29.975 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:29.975 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:29.975 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.975 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:29.975 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:29.975 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:29.975 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:29.975 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:29.975 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:29.975 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:29.975 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:29.975 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:29.975 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:29.975 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:29.975 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:29.975 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.975 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:30.235 nvme0n1 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe3072 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe3072 4 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:30.235 nvme0n1 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.235 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.236 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:30.236 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.236 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:30.236 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: ]] 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 0 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.494 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:30.751 nvme0n1 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: ]] 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 1 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@71 
-- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:30.751 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:30.752 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:30.752 11:00:46 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:30.752 11:00:46 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.752 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.752 11:00:46 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:31.009 nvme0n1 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: ]] 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 2 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.009 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:31.267 nvme0n1 00:20:31.267 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.267 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.267 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:31.267 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.267 11:00:47 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:31.267 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.267 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.267 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.267 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.267 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:31.267 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.267 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:31.267 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:20:31.267 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:31.267 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:31.267 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:31.267 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:31.267 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:31.267 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:31.267 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:31.267 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:31.267 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:31.267 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: ]] 00:20:31.267 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:31.268 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 3 00:20:31.268 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:31.268 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:31.268 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:31.268 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:31.268 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:31.268 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:31.268 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.268 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:31.268 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.525 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:31.525 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:31.525 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:31.525 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:31.525 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.525 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
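Every iteration in the trace above follows the same cycle: nvmet_auth_set_key pushes the key pair for the current keyid to the kernel target, connect_authenticate points the SPDK initiator at the digest/dhgroup under test via bdev_nvme_set_options, attaches a controller with the matching key names, checks with bdev_nvme_get_controllers and jq that nvme0 came up, and detaches it again. Below is a minimal sketch of one such iteration, with several assumptions: scripts/rpc.py stands in for the test's rpc_cmd wrapper, the key names key3/ckey3 were registered with the initiator keyring earlier in the run (not visible here), and the configfs attribute paths are inferred from the echoed values, not taken from the test source.

  # One connect_authenticate cycle (sha384 / ffdhe4096 / keyid 3), sketched.
  digest=sha384 dhgroup=ffdhe4096 keyid=3
  key='DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==:'
  ckey='DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw:'
  # Target side: assumed kernel nvmet configfs layout for DH-HMAC-CHAP.
  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo "hmac($digest)" > "$host/dhchap_hash"
  echo "$dhgroup"      > "$host/dhchap_dhgroup"
  echo "$key"          > "$host/dhchap_key"
  [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"
  # Initiator side: the same RPCs that appear in the trace.
  scripts/rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  [[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  scripts/rpc.py bdev_nvme_detach_controller nvme0

The bare nvme0n1 lines interleaved in the output appear to be the bdev name printed by each successful attach before the controller is torn down for the next combination.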
00:20:31.525 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:31.525 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.525 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:31.525 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:31.525 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:31.525 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:31.525 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.525 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:31.525 nvme0n1 00:20:31.525 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.525 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:31.525 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:31.525 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.525 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:31.525 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe4096 4 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:31.783 11:00:47 
nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:31.783 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:31.784 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:31.784 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:31.784 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:31.784 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:31.784 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:31.784 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:31.784 11:00:47 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:31.784 11:00:47 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:31.784 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.784 11:00:47 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:32.043 nvme0n1 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=0 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: ]] 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 0 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.043 11:00:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:32.609 nvme0n1 00:20:32.609 11:00:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.609 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:20:32.609 11:00:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.609 11:00:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:32.609 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:32.609 11:00:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.609 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.609 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:32.609 11:00:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.609 11:00:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:32.609 11:00:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.609 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:32.609 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:20:32.609 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:32.609 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:32.609 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:32.609 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:32.609 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:32.609 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:32.609 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:32.609 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:32.609 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:32.609 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: ]] 00:20:32.609 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:32.609 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 1 00:20:32.610 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:32.610 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:32.610 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:32.610 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:32.610 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:32.610 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:32.610 11:00:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.610 11:00:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:32.610 11:00:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.610 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:32.610 11:00:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:32.610 11:00:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:32.610 
11:00:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:32.610 11:00:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:32.610 11:00:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:32.610 11:00:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:32.610 11:00:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:32.610 11:00:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:32.610 11:00:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:32.610 11:00:48 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:32.610 11:00:48 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.610 11:00:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.610 11:00:48 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.173 nvme0n1 00:20:33.173 11:00:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.173 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.173 11:00:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.173 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:33.173 11:00:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.173 11:00:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.173 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.173 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.173 11:00:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.173 11:00:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: ]] 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:33.174 
11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 2 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.174 11:00:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.740 nvme0n1 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- 
host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: ]] 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 3 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:33.740 11:00:49 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.307 nvme0n1 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe6144 4 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 
00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.307 11:00:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.874 nvme0n1 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: ]] 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 0 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.874 11:00:50 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:34.874 11:00:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.874 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:34.874 11:00:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:34.874 11:00:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:34.874 11:00:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:34.874 11:00:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:34.874 11:00:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:34.874 11:00:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:34.874 11:00:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:34.874 11:00:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:34.874 11:00:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:34.874 11:00:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:34.874 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.874 11:00:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.874 11:00:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.807 nvme0n1 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.807 11:00:51 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: ]] 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 1 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.807 11:00:51 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:36.739 nvme0n1 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: ]] 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 2 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe8192 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.739 11:00:52 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:38.110 nvme0n1 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:38.110 
11:00:53 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: ]] 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 3 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:38.110 11:00:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:20:38.111 11:00:53 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:38.111 11:00:53 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:38.111 11:00:53 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:38.111 11:00:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.111 11:00:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:38.111 11:00:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.111 11:00:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:38.111 11:00:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:38.111 11:00:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:38.111 11:00:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:38.111 11:00:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.111 11:00:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.111 11:00:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:38.111 11:00:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.111 11:00:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:38.111 11:00:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:38.111 11:00:53 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:38.111 11:00:53 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:38.111 11:00:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.111 11:00:53 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:38.676 nvme0n1 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.676 11:00:54 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha384 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha384 ffdhe8192 4 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha384 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:38.676 11:00:54 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.676 11:00:54 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:39.610 nvme0n1 00:20:39.610 11:00:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.610 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.610 11:00:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.610 11:00:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:39.610 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:39.610 11:00:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@113 -- # for digest in "${digests[@]}" 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: ]] 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 0 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:39.869 
11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.869 11:00:55 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:39.869 nvme0n1 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=1 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: ]] 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 1 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.869 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.135 nvme0n1 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: ]] 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 2 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.135 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.392 nvme0n1 00:20:40.392 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.392 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.392 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: ]] 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate 
sha512 ffdhe2048 3 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.393 nvme0n1 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:20:40.393 11:00:56 
nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe2048 4 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe2048 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.393 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.651 nvme0n1 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 
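Two details in the surrounding entries are easy to miss. First, the secrets follow the NVMe DH-HMAC-CHAP key notation DHHC-1:<hh>:<base64>:, where the two digits identify the transformation hash (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the secret followed by a 4-byte CRC-32. Second, keyid=4 in this run has no controller key configured (ckey=''), so the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion at @71 collapses to an empty array and the flag silently disappears from its attach commands, making that leg unidirectional. A quick, illustrative check of the key framing, with the key value copied verbatim from the log:

# Decode one DHHC-1 key from the trace and confirm the payload length:
# an :03: key should decode to 68 bytes (64-byte secret + 4-byte CRC-32).
key='DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=:'
b64=${key#DHHC-1:??:}   # strip the "DHHC-1:03:" prefix
b64=${b64%:}            # strip the trailing colon
printf '%s' "$b64" | base64 -d | wc -c   # prints 68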
00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: ]] 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 0 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:40.651 
11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:40.651 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.652 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.652 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:40.652 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.652 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:40.652 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:40.652 11:00:56 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:40.652 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.652 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.652 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.910 nvme0n1 00:20:40.910 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.910 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:40.910 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.910 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.910 11:00:56 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:40.910 11:00:56 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.910 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.910 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:40.910 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.910 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.910 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.910 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:40.910 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:20:40.910 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:40.910 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: ]] 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 1 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.911 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:41.169 nvme0n1 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: ]] 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 2 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:41.169 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:41.170 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:41.170 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.170 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.170 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:41.170 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.170 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:41.170 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:41.170 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:41.170 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b 
nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.170 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.170 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:41.428 nvme0n1 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe3072 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: ]] 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 3 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:41.428 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:41.429 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.429 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.429 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:41.429 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.429 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:41.429 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:41.429 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:41.429 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:41.429 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.429 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:41.429 nvme0n1 00:20:41.429 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.429 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.429 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.429 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:41.429 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:41.429 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.686 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.686 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.686 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.686 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:41.686 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.686 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:41.686 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:20:41.686 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.686 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:41.686 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:41.686 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:41.686 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:41.686 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:41.686 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- 
host/auth.sh@49 -- # echo ffdhe3072 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe3072 4 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe3072 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:41.687 nvme0n1 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.687 11:00:57 nvmf_tcp.nvmf_auth -- 
common/autotest_common.sh@10 -- # set +x 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: ]] 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 0 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.945 11:00:57 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:41.945 nvme0n1 00:20:41.945 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.945 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:41.945 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:41.945 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.945 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: ]] 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 1 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@71 
-- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.204 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.462 nvme0n1 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # 
key=DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: ]] 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 2 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:42.462 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.463 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.463 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.463 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:42.463 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:42.463 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:42.463 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:42.463 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.463 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.463 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:42.463 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.463 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:42.463 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:42.463 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:42.463 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.463 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.463 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.721 nvme0n1 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.721 11:00:58 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: ]] 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 3 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
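On the initiator side, each iteration boils down to two RPCs: bdev_nvme_set_options restricts the negotiable digest/DH-group pair, and bdev_nvme_attach_controller connects with the key material for the current keyid (rpc_cmd in the trace is the test suite's thin wrapper around SPDK's scripts/rpc.py). A sketch of the sha512/ffdhe4096/keyid=3 iteration above as direct rpc.py calls, assuming key3 and ckey3 name keys already registered with the SPDK application:

    # Sketch: one connect_authenticate iteration issued by hand via rpc.py.
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3
    # Verify the controller came up authenticated, then tear it down
    # before the next digest/dhgroup/keyid combination:
    ./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # -> nvme0
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0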
00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.721 11:00:58 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.979 nvme0n1 00:20:42.979 11:00:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.979 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:42.979 11:00:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.979 11:00:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.979 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:42.979 11:00:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.979 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.979 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:42.979 11:00:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.979 11:00:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.979 11:00:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.979 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:42.979 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:20:42.979 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:42.979 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:42.979 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:42.979 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:42.979 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:42.979 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:42.979 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:42.979 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe4096 00:20:42.979 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:42.979 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:42.979 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe4096 4 00:20:42.980 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:42.980 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:42.980 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe4096 00:20:42.980 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:42.980 11:00:59 
nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:42.980 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:42.980 11:00:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.980 11:00:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:42.980 11:00:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.980 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:42.980 11:00:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:42.980 11:00:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:42.980 11:00:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:42.980 11:00:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.980 11:00:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.980 11:00:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:42.980 11:00:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.980 11:00:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:42.980 11:00:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:42.980 11:00:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:42.980 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:42.980 11:00:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.980 11:00:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:43.249 nvme0n1 00:20:43.249 11:00:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.249 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:43.250 11:00:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.250 11:00:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:43.250 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:43.250 11:00:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.535 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.535 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.535 11:00:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.535 11:00:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:43.535 11:00:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.535 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.535 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:43.535 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:20:43.535 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:43.535 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:43.535 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:43.535 11:00:59 nvmf_tcp.nvmf_auth -- 
host/auth.sh@44 -- # keyid=0 00:20:43.535 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:43.535 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:43.535 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:43.535 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:43.535 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:43.535 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: ]] 00:20:43.536 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:43.536 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 0 00:20:43.536 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:43.536 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:43.536 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:43.536 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:43.536 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:43.536 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:43.536 11:00:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.536 11:00:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:43.536 11:00:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.536 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:43.536 11:00:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:43.536 11:00:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:43.536 11:00:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:43.536 11:00:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:43.536 11:00:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:43.536 11:00:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:43.536 11:00:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:43.536 11:00:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:43.536 11:00:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:43.536 11:00:59 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:43.536 11:00:59 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.536 11:00:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.536 11:00:59 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.102 nvme0n1 00:20:44.102 11:01:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.102 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd 
bdev_nvme_get_controllers 00:20:44.102 11:01:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.102 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:44.102 11:01:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.102 11:01:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.102 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.102 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.102 11:01:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.102 11:01:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.102 11:01:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.102 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:44.102 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:20:44.102 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.102 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:44.102 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:44.102 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:44.102 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:44.102 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:44.102 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:44.102 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:44.102 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:44.102 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: ]] 00:20:44.102 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:44.103 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 1 00:20:44.103 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:44.103 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:44.103 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:44.103 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:44.103 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.103 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:44.103 11:01:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.103 11:01:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.103 11:01:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.103 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:44.103 11:01:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:44.103 11:01:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:44.103 
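A note on the odd-looking [[ nvme0 == \n\v\m\e\0 ]] comparisons that recur at auth.sh@77: inside [[ ]], an unquoted right-hand side of == is a glob pattern while a quoted one matches literally, and xtrace re-prints the quoted side with every character backslash-escaped. The check is therefore a plain literal comparison, roughly (paraphrased, not verbatim from auth.sh; rpc_cmd is the suite's rpc.py wrapper):

    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]   # quoted RHS = literal match; xtrace prints \n\v\m\e\0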
11:01:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:44.103 11:01:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.103 11:01:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.103 11:01:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:44.103 11:01:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.103 11:01:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:44.103 11:01:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:44.103 11:01:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:44.103 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.103 11:01:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.103 11:01:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.668 nvme0n1 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: ]] 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:44.669 
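All the secrets exchanged in this test use the NVMe-defined DHHC-1 representation, DHHC-1:<id>:<base64 of secret plus CRC-32>:, where the second field records the transformation HMAC (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and thereby implies the decoded key length. Compatible secrets can be generated with nvme-cli; the flag names below assume a reasonably recent nvme-cli, so check nvme gen-dhchap-key --help on your version:

    # Sketch: generate a 64-byte, SHA-512-transformed secret in the same
    # DHHC-1:03:<base64>: form as the keys in this log.
    nvme gen-dhchap-key --hmac=3 --key-length=64 \
        --nqn nqn.2024-02.io.spdk:host0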
11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 2 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.669 11:01:00 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.240 nvme0n1 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- 
host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: ]] 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 3 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.240 11:01:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.241 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:45.241 11:01:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:45.241 11:01:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:45.241 11:01:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:45.241 11:01:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.241 11:01:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.241 11:01:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:45.241 11:01:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.241 11:01:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:45.241 11:01:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:45.241 11:01:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:45.241 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:45.241 11:01:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:45.241 11:01:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.804 nvme0n1 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe6144 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe6144 4 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe6144 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:45.804 11:01:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 
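At this point the suite is iterating connect_authenticate over every key for the sha512/ffdhe6144 combination. Each iteration has two halves: nvmet_auth_set_key programs the kernel target with the digest, DH group, and DHHC-1 secret for the host NQN, and connect_authenticate then points SPDK's initiator at the same parameters and re-attaches. A minimal sketch of the target-side half, assuming the standard Linux nvmet configfs attribute names (the xtrace above shows the echoes at host/auth.sh@48-51 but elides their redirect targets):

    # key/ckey hold the DHHC-1:xx:...: secrets assigned at host/auth.sh@45/@46
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$host_dir/dhchap_hash"      # host/auth.sh@48
    echo ffdhe6144      > "$host_dir/dhchap_dhgroup"   # host/auth.sh@49
    echo "$key"         > "$host_dir/dhchap_key"       # host/auth.sh@50
    if [ -n "$ckey" ]; then
        # only written when a controller key exists (bidirectional auth)
        echo "$ckey" > "$host_dir/dhchap_ctrlr_key"    # host/auth.sh@51
    fi

On the host side, connect_authenticate restricts SPDK to the matching algorithms via bdev_nvme_set_options --dhchap-digests/--dhchap-dhgroups and re-attaches with --dhchap-key keyN (plus --dhchap-ctrlr-key ckeyN when defined); the bdev_nvme_get_controllers check for nvme0 that follows each attach is what proves the handshake completed.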
00:20:45.805 11:01:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:45.805 11:01:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.805 11:01:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.805 11:01:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:45.805 11:01:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.805 11:01:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:45.805 11:01:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:45.805 11:01:01 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:45.805 11:01:01 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:45.805 11:01:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.805 11:01:01 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.061 nvme0n1 00:20:46.061 11:01:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.061 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:46.061 11:01:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.061 11:01:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.061 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@114 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=0 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:NDRhZGVmMjRjMDhjODAyMjMwNmUxYjc5OTFhOWI4MzTFvE72: 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: ]] 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:03:ZjY3MTYxN2I2NTFiMGQ0NmMwZjhmNTJlZTkwNWRiYjY0ZDc2MmYyMWY1ODYzZmY4ZGY2ZDMwNzljMmRhMTA2MgrlwdU=: 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 0 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=0 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.318 11:01:02 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:47.248 nvme0n1 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.248 11:01:03 
nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: ]] 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 1 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:20:47.248 11:01:03 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=1 00:20:47.249 11:01:03 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.249 11:01:03 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:47.249 11:01:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.249 11:01:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:47.249 11:01:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.249 11:01:03 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:47.249 11:01:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:47.249 11:01:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:47.249 11:01:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:47.249 11:01:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.249 11:01:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.249 11:01:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:47.249 11:01:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.249 11:01:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:47.249 11:01:03 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:47.249 11:01:03 nvmf_tcp.nvmf_auth -- 
nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:47.249 11:01:03 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.249 11:01:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.249 11:01:03 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:48.181 nvme0n1 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=2 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:01:NjA4OGNiNDZhYTFiN2NiNDA1YzRmMTk0YTFkYTk4NmJhhqWI: 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: ]] 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:01:OWEzZjQ5YTc4YWZhM2VhOThkMjczMGQ0YzYwOWY4OTLISVwn: 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 2 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=2 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups 
ffdhe8192 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.181 11:01:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.182 11:01:04 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:49.117 nvme0n1 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=3 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:49.117 
11:01:05 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:02:MjNmYzdiMDczYWEyNzQ5OGYxM2MwZTQ0MTlkODcxNTRjZjMzNDQyNzJiMWFhMDZklID4vg==: 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: ]] 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:00:NDY5NjY0NGFmZWJjZDU4NzBkYzgxYTc0ZTg4NTBhNjeyuQEw: 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 3 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=3 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.117 11:01:05 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:50.051 nvme0n1 00:20:50.051 11:01:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.051 11:01:06 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.051 11:01:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.051 11:01:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:50.051 11:01:06 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:50.051 11:01:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.051 11:01:06 
nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- host/auth.sh@115 -- # for keyid in "${!keys[@]}" 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- host/auth.sh@116 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha512 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=4 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey= 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe8192 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:03:ZTFlMDE0NGFiY2ZiOTQyMzYzMmFjMzU1MzNlZTFiODgxZGI5YzAwMTcxMDMxOWI3OTNiOWYzNGE1MjY3MWYyYi9nV/Y=: 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- host/auth.sh@117 -- # connect_authenticate sha512 ffdhe8192 4 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- host/auth.sh@68 -- # local digest dhgroup keyid ckey 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # digest=sha512 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # dhgroup=ffdhe8192 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- host/auth.sh@70 -- # keyid=4 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- host/auth.sh@71 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # get_main_ns_ip 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:50.052 11:01:06 
nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.052 11:01:06 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:51.427 nvme0n1 00:20:51.427 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # jq -r '.[].name' 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@77 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@78 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@123 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # digest=sha256 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@44 -- # keyid=1 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@45 -- # key=DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@49 -- # echo ffdhe2048 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@50 -- # echo DHHC-1:00:M2ZlYjgwOGNhNjc3OWU0NjQ1MzA0OTMyOThlYTYyNGEyYWYzZjBlYTQ4ZWEwMTIyN9VTig==: 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: ]] 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@51 -- # echo DHHC-1:02:ZjM1MTQxYmIxZmM0ZDY2YjNmYjY2YzVmZTM4OTFjNWU0ODg0OWVmMTlhMmVlYzAxw8DOCA==: 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@124 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@125 -- # get_main_ns_ip 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:51.428 
11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@125 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:51.428 request: 00:20:51.428 { 00:20:51.428 "name": "nvme0", 00:20:51.428 "trtype": "tcp", 00:20:51.428 "traddr": "10.0.0.1", 00:20:51.428 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:51.428 "adrfam": "ipv4", 00:20:51.428 "trsvcid": "4420", 00:20:51.428 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:51.428 "method": "bdev_nvme_attach_controller", 00:20:51.428 "req_id": 1 00:20:51.428 } 00:20:51.428 Got JSON-RPC error response 00:20:51.428 response: 00:20:51.428 { 00:20:51.428 "code": -32602, 00:20:51.428 "message": "Invalid parameters" 00:20:51.428 } 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # jq length 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.428 11:01:07 
nvmf_tcp.nvmf_auth -- host/auth.sh@127 -- # (( 0 == 0 )) 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@130 -- # get_main_ns_ip 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@130 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:51.428 request: 00:20:51.428 { 00:20:51.428 "name": "nvme0", 00:20:51.428 "trtype": "tcp", 00:20:51.428 "traddr": "10.0.0.1", 00:20:51.428 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:51.428 "adrfam": "ipv4", 00:20:51.428 "trsvcid": "4420", 00:20:51.428 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:51.428 "dhchap_key": "key2", 00:20:51.428 "method": "bdev_nvme_attach_controller", 00:20:51.428 "req_id": 1 00:20:51.428 } 00:20:51.428 Got JSON-RPC error response 00:20:51.428 response: 00:20:51.428 { 00:20:51.428 "code": -32602, 00:20:51.428 "message": "Invalid parameters" 00:20:51.428 } 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # 
rpc_cmd bdev_nvme_get_controllers 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # jq length 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@133 -- # (( 0 == 0 )) 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@136 -- # get_main_ns_ip 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@728 -- # local ip 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # ip_candidates=() 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@729 -- # local -A ip_candidates 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@648 -- # local es=0 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:51.428 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x 00:20:51.429 request: 00:20:51.429 { 00:20:51.429 "name": "nvme0", 00:20:51.429 "trtype": "tcp", 00:20:51.429 "traddr": "10.0.0.1", 00:20:51.429 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:20:51.429 "adrfam": "ipv4", 00:20:51.429 "trsvcid": "4420", 00:20:51.429 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:20:51.429 "dhchap_key": "key1", 00:20:51.429 "dhchap_ctrlr_key": "ckey2", 00:20:51.429 "method": "bdev_nvme_attach_controller", 00:20:51.429 "req_id": 1 00:20:51.429 } 00:20:51.429 Got JSON-RPC error response 00:20:51.429 response: 00:20:51.429 { 00:20:51.429 "code": -32602, 00:20:51.429 "message": "Invalid parameters" 00:20:51.429 } 
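The three request/response pairs above are the negative half of the suite: attaching with no key, with key2 alone, and with key1 paired with the wrong controller key (ckey2) must all be rejected by the target with -32602 Invalid parameters. The NOT helper from autotest_common.sh simply inverts the exit status of rpc_cmd, which is the autotest wrapper around scripts/rpc.py. Reduced to its core, the expected-failure pattern looks like this sketch (flags taken from the rpc_cmd invocations visible in the trace):

    if scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
        echo "connect unexpectedly succeeded with mismatched keys" >&2
        exit 1
    fi
    # the jq length == 0 check on bdev_nvme_get_controllers then
    # confirms nothing was left attached (host/auth.sh@127/@133)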
00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@651 -- # es=1 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@140 -- # trap - SIGINT SIGTERM EXIT 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@141 -- # cleanup 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- host/auth.sh@24 -- # nvmftestfini 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@117 -- # sync 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@120 -- # set +e 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:51.429 rmmod nvme_tcp 00:20:51.429 rmmod nvme_fabrics 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@124 -- # set -e 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@125 -- # return 0 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@489 -- # '[' -n 2870712 ']' 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@490 -- # killprocess 2870712 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@946 -- # '[' -z 2870712 ']' 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@950 -- # kill -0 2870712 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@951 -- # uname 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2870712 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2870712' 00:20:51.429 killing process with pid 2870712 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@965 -- # kill 2870712 00:20:51.429 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@970 -- # wait 2870712 00:20:51.689 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:51.689 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:51.689 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:51.689 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:51.689 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:51.689 11:01:07 nvmf_tcp.nvmf_auth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.689 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:51.689 11:01:07 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.222 11:01:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@279 
-- # ip -4 addr flush cvl_0_1 00:20:54.222 11:01:09 nvmf_tcp.nvmf_auth -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:54.222 11:01:09 nvmf_tcp.nvmf_auth -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:54.222 11:01:09 nvmf_tcp.nvmf_auth -- host/auth.sh@27 -- # clean_kernel_target 00:20:54.222 11:01:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:20:54.222 11:01:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@686 -- # echo 0 00:20:54.222 11:01:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:54.222 11:01:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:54.222 11:01:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:54.222 11:01:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:54.222 11:01:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:20:54.222 11:01:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:20:54.222 11:01:09 nvmf_tcp.nvmf_auth -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:20:55.157 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:20:55.157 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:20:55.157 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:20:55.157 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:20:55.157 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:20:55.157 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:20:55.157 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:20:55.157 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:20:55.157 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:20:55.157 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:20:55.157 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:20:55.157 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:20:55.157 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:20:55.157 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:20:55.157 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:20:55.157 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:20:56.093 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:20:56.093 11:01:12 nvmf_tcp.nvmf_auth -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.4B7 /tmp/spdk.key-null.f1E /tmp/spdk.key-sha256.vSW /tmp/spdk.key-sha384.TQm /tmp/spdk.key-sha512.17w /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:20:56.093 11:01:12 nvmf_tcp.nvmf_auth -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:20:57.482 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:20:57.482 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:20:57.482 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:20:57.482 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:20:57.482 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:20:57.482 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:20:57.482 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:20:57.482 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:20:57.482 0000:00:04.0 (8086 0e20): Already using the vfio-pci 
driver
00:20:57.482 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:20:57.482 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:20:57.482 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:20:57.482 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:20:57.482 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:20:57.482 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:20:57.482 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:20:57.482 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:20:57.741
00:20:57.741 real 0m47.518s
00:20:57.741 user 0m44.677s
00:20:57.741 sys 0m6.204s 11:01:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@1122 -- # xtrace_disable
00:20:57.741 11:01:13 nvmf_tcp.nvmf_auth -- common/autotest_common.sh@10 -- # set +x
00:20:57.741 ************************************
00:20:57.741 END TEST nvmf_auth
00:20:57.741 ************************************
00:20:57.741 11:01:13 nvmf_tcp -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]]
00:20:57.741 11:01:13 nvmf_tcp -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:20:57.741 11:01:13 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:20:57.741 11:01:13 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:20:57.741 11:01:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:20:57.741 ************************************
00:20:57.741 START TEST nvmf_digest
00:20:57.741 ************************************
00:20:57.741 11:01:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp
00:20:57.741 * Looking for test storage...
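nvmf_auth finishes cleanly after roughly 47 seconds of wall time, and run_test moves on to digest.sh, which exercises NVMe/TCP header and data digests: the optional CRC32C checksums a host and controller can negotiate for PDU headers and payloads at connect time. For orientation only, the same on-the-wire feature can be enabled from a kernel initiator with nvme-cli; these flags are illustrative and not taken from this trace (the subsystem NQN is the one digest.sh sets below at host/digest.sh@14):

    nvme connect -t tcp -a 10.0.0.1 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --hdr-digest --data-digest   # enable CRC32C on PDU headers and data

In the test itself the checksums are computed by SPDK's own initiator, driven over the bperf socket declared at host/digest.sh@15, rather than by the kernel driver.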
00:20:57.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:57.741 11:01:13 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:57.741 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:20:57.741 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:57.741 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:57.741 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:57.741 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:57.741 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:57.741 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:57.741 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:57.741 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:57.741 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:57.741 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:57.741 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.741 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:57.741 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:57.741 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:57.741 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:57.741 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:57.741 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:57.741 11:01:13 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:57.741 11:01:13 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:57.741 11:01:13 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:57.741 11:01:13 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:57.742 11:01:13 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:20:57.742 11:01:13 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:00.318 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:00.318 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:00.318 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:00.318 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:00.319 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:00.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:00.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:21:00.319 00:21:00.319 --- 10.0.0.2 ping statistics --- 00:21:00.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.319 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:00.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:00.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:21:00.319 00:21:00.319 --- 10.0.0.1 ping statistics --- 00:21:00.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:00.319 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:00.319 ************************************ 00:21:00.319 START TEST nvmf_digest_clean 00:21:00.319 ************************************ 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1121 -- # run_digest 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2880477 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2880477 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 2880477 ']' 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:00.319 
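Annotation: the nvmf_tcp_init sequence traced above splits the two E810 ports so NVMe/TCP traffic crosses the physical link: cvl_0_1 stays in the root namespace as the initiator (10.0.0.1) and cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target (10.0.0.2). The same commands, condensed from the trace into a standalone sketch:

  # flush any stale addresses, then move the target port into its own namespace
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port on the initiator side, then verify both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1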
11:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:00.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:00.319 11:01:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:00.319 [2024-05-15 11:01:16.461596] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:21:00.319 [2024-05-15 11:01:16.461688] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:00.319 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.319 [2024-05-15 11:01:16.543087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.577 [2024-05-15 11:01:16.663054] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:00.577 [2024-05-15 11:01:16.663127] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:00.577 [2024-05-15 11:01:16.663143] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:00.577 [2024-05-15 11:01:16.663157] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:00.577 [2024-05-15 11:01:16.663169] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
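Annotation: with networking in place, nvmf_tgt is launched inside the namespace with --wait-for-rpc, waitforlisten polls /var/tmp/spdk.sock until RPCs answer, and common_target_config (just below) builds the subsystem over RPC. A minimal sketch of that bring-up; the NQN, serial, transport options and listener address are read off the trace, while the null bdev sizes and the -a (allow any host) flag are illustrative assumptions:

  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  # waitforlisten: retry RPCs until the socket answers, then finish init
  scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o
  scripts/rpc.py -s /var/tmp/spdk.sock bdev_null_create null0 100 4096   # size/block illustrative
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420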
00:21:00.577 [2024-05-15 11:01:16.663209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:01.510 null0 00:21:01.510 [2024-05-15 11:01:17.585268] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:01.510 [2024-05-15 11:01:17.609250] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:01.510 [2024-05-15 11:01:17.609527] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2880630 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2880630 /var/tmp/bperf.sock 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 2880630 ']' 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:01.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:01.510 11:01:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:01.510 [2024-05-15 11:01:17.658331] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:21:01.510 [2024-05-15 11:01:17.658423] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2880630 ] 00:21:01.510 EAL: No free 2048 kB hugepages reported on node 1 00:21:01.510 [2024-05-15 11:01:17.733538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:01.768 [2024-05-15 11:01:17.857268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.702 11:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:02.702 11:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:21:02.702 11:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:02.702 11:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:02.702 11:01:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:02.960 11:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:02.960 11:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:03.526 nvme0n1 00:21:03.526 11:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:03.526 11:01:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:03.526 Running I/O for 2 seconds... 
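Annotation: the initiator side just traced is bdevperf on core 1 with its own RPC socket; --ddgst enables the NVMe/TCP data digest (CRC32C over each PDU payload), which is exactly what this test exercises. The flow, condensed:

  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
  scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests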
00:21:05.424 00:21:05.424 Latency(us) 00:21:05.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.424 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:05.424 nvme0n1 : 2.01 18766.96 73.31 0.00 0.00 6810.79 3179.71 18447.17 00:21:05.424 =================================================================================================================== 00:21:05.424 Total : 18766.96 73.31 0.00 0.00 6810.79 3179.71 18447.17 00:21:05.424 0 00:21:05.424 11:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:05.424 11:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:05.424 11:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:05.424 11:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:05.424 11:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:05.424 | select(.opcode=="crc32c") 00:21:05.424 | "\(.module_name) \(.executed)"' 00:21:05.682 11:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:05.682 11:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:05.682 11:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:05.682 11:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:05.682 11:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2880630 00:21:05.682 11:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 2880630 ']' 00:21:05.682 11:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 2880630 00:21:05.682 11:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:21:05.682 11:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:05.682 11:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2880630 00:21:05.682 11:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:05.682 11:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:05.682 11:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2880630' 00:21:05.682 killing process with pid 2880630 00:21:05.682 11:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 2880630 00:21:05.682 Received shutdown signal, test time was about 2.000000 seconds 00:21:05.682 00:21:05.682 Latency(us) 00:21:05.682 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:05.682 =================================================================================================================== 00:21:05.682 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:05.682 11:01:21 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 2880630 00:21:05.940 11:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:21:05.940 11:01:22 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:05.940 11:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:05.940 11:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:05.940 11:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:05.940 11:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:05.940 11:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:05.940 11:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2881170 00:21:05.940 11:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:05.941 11:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2881170 /var/tmp/bperf.sock 00:21:05.941 11:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 2881170 ']' 00:21:05.941 11:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:05.941 11:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:05.941 11:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:05.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:05.941 11:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:05.941 11:01:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:06.198 [2024-05-15 11:01:22.205881] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:21:06.198 [2024-05-15 11:01:22.205983] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881170 ] 00:21:06.198 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:06.198 Zero copy mechanism will not be used. 
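Annotation: two quick checks on the first run above. The MiB/s column of the result table is just IOPS times block size: 18766.96 IOPS x 4096 B / 2^20 = 73.31 MiB/s, matching the table. And the accel_get_stats query confirms the crc32c digests were really computed, and by the software module rather than an offload engine:

  scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  # expected output shape: 'software <nonzero count>'; the test asserts
  # module_name == software and executed > 0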
00:21:06.198 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.198 [2024-05-15 11:01:22.278640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.198 [2024-05-15 11:01:22.399429] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.132 11:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:07.132 11:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:21:07.132 11:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:07.132 11:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:07.132 11:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:07.391 11:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:07.391 11:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:07.648 nvme0n1 00:21:07.648 11:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:07.648 11:01:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:07.906 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:07.906 Zero copy mechanism will not be used. 00:21:07.906 Running I/O for 2 seconds... 
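Annotation: run_bperf is invoked four times in this phase (host/digest.sh lines 128-131 in the trace), sweeping both I/O directions and both I/O shapes. The calls above and below are equivalent to this sketch:

  # args: rw, block size, queue depth; trailing false = scan_dsa (no DSA offload)
  for args in 'randread 4096 128' 'randread 131072 16' 'randwrite 4096 128' 'randwrite 131072 16'; do
      run_bperf $args false
  done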
00:21:09.805 00:21:09.805 Latency(us) 00:21:09.805 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.805 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:09.805 nvme0n1 : 2.01 1921.21 240.15 0.00 0.00 8322.13 8009.96 11602.30 00:21:09.805 =================================================================================================================== 00:21:09.805 Total : 1921.21 240.15 0.00 0.00 8322.13 8009.96 11602.30 00:21:09.805 0 00:21:09.805 11:01:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:09.805 11:01:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:09.805 11:01:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:09.805 11:01:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:09.805 11:01:25 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:09.805 | select(.opcode=="crc32c") 00:21:09.805 | "\(.module_name) \(.executed)"' 00:21:10.062 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:10.062 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:10.062 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:10.062 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:10.062 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2881170 00:21:10.062 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 2881170 ']' 00:21:10.062 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 2881170 00:21:10.062 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:21:10.062 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:10.062 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2881170 00:21:10.062 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:10.063 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:10.063 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2881170' 00:21:10.063 killing process with pid 2881170 00:21:10.063 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 2881170 00:21:10.063 Received shutdown signal, test time was about 2.000000 seconds 00:21:10.063 00:21:10.063 Latency(us) 00:21:10.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.063 =================================================================================================================== 00:21:10.063 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:10.063 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 2881170 00:21:10.630 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:21:10.630 11:01:26 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:10.630 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:10.630 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:10.630 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:10.630 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:10.630 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:10.630 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2881703 00:21:10.630 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:10.630 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2881703 /var/tmp/bperf.sock 00:21:10.630 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 2881703 ']' 00:21:10.630 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:10.630 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:10.630 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:10.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:10.630 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:10.630 11:01:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:10.630 [2024-05-15 11:01:26.600804] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
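Annotation: this third sweep switches to -w randwrite, so the initiator now computes the data digest on transmit and the target verifies it. The attach a few lines below enables the data digest only; a variation this run does not use, assuming rpc.py's --hdgst option (which mirrors --ddgst), would request header digests as well:

  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --hdgst --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0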
00:21:10.630 [2024-05-15 11:01:26.600900] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2881703 ] 00:21:10.630 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.630 [2024-05-15 11:01:26.674537] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.630 [2024-05-15 11:01:26.791856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.564 11:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:11.564 11:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:21:11.564 11:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:11.564 11:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:11.564 11:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:11.822 11:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:11.822 11:01:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:12.079 nvme0n1 00:21:12.079 11:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:12.079 11:01:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:12.079 Running I/O for 2 seconds... 
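Annotation: cross-checking the write numbers in the table just below: 20300.24 IOPS x 4096 B / 2^20 = 79.30 MiB/s, which agrees with the MiB/s column; random 4 KiB writes with digest come out slightly ahead of the corresponding reads here (20.3k vs 18.8k IOPS).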
00:21:14.609 00:21:14.609 Latency(us) 00:21:14.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.609 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:14.609 nvme0n1 : 2.01 20300.24 79.30 0.00 0.00 6294.54 4004.98 11893.57 00:21:14.609 =================================================================================================================== 00:21:14.609 Total : 20300.24 79.30 0.00 0.00 6294.54 4004.98 11893.57 00:21:14.609 0 00:21:14.609 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:14.609 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:14.609 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:14.609 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:14.609 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:14.609 | select(.opcode=="crc32c") 00:21:14.609 | "\(.module_name) \(.executed)"' 00:21:14.609 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:14.609 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:14.609 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:14.609 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:14.609 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2881703 00:21:14.609 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 2881703 ']' 00:21:14.609 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 2881703 00:21:14.609 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:21:14.609 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:14.609 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2881703 00:21:14.609 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:14.609 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:14.609 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2881703' 00:21:14.609 killing process with pid 2881703 00:21:14.609 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 2881703 00:21:14.609 Received shutdown signal, test time was about 2.000000 seconds 00:21:14.609 00:21:14.609 Latency(us) 00:21:14.609 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.609 =================================================================================================================== 00:21:14.609 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:14.609 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 2881703 00:21:14.867 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:21:14.867 11:01:30 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:14.867 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:14.867 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:14.867 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:14.867 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:14.867 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:14.867 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2882247 00:21:14.867 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:14.867 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2882247 /var/tmp/bperf.sock 00:21:14.867 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@827 -- # '[' -z 2882247 ']' 00:21:14.867 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:14.867 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:14.867 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:14.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:14.867 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:14.867 11:01:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:14.867 [2024-05-15 11:01:30.939321] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:21:14.867 [2024-05-15 11:01:30.939421] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2882247 ] 00:21:14.867 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:14.867 Zero copy mechanism will not be used. 
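Annotation: each sweep above ended with the same killprocess teardown, and this final one does too. Condensed from the repeated trace fragments:

  kill -0 "$bperfpid"                       # pid still alive?
  ps --no-headers -o comm= "$bperfpid"      # name check: reactor_1, not a sudo wrapper
  kill "$bperfpid"                          # bdevperf prints 'Received shutdown signal'
  wait "$bperfpid"                          # reap it before the next sweep starts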
00:21:14.867 EAL: No free 2048 kB hugepages reported on node 1 00:21:14.867 [2024-05-15 11:01:31.013350] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.125 [2024-05-15 11:01:31.134443] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.125 11:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:15.125 11:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # return 0 00:21:15.125 11:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:15.125 11:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:15.125 11:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:15.422 11:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:15.422 11:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:16.005 nvme0n1 00:21:16.005 11:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:16.005 11:01:31 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:16.005 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:16.005 Zero copy mechanism will not be used. 00:21:16.005 Running I/O for 2 seconds... 
00:21:17.909 00:21:17.909 Latency(us) 00:21:17.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.909 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:17.909 nvme0n1 : 2.01 1211.58 151.45 0.00 0.00 13154.42 4563.25 15922.82 00:21:17.909 =================================================================================================================== 00:21:17.909 Total : 1211.58 151.45 0.00 0.00 13154.42 4563.25 15922.82 00:21:17.909 0 00:21:17.909 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:17.909 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:17.909 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:17.909 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:17.909 | select(.opcode=="crc32c") 00:21:17.909 | "\(.module_name) \(.executed)"' 00:21:17.910 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:18.167 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:18.167 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:18.167 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:18.167 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:18.167 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2882247 00:21:18.167 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 2882247 ']' 00:21:18.167 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 2882247 00:21:18.167 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:21:18.167 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:18.167 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2882247 00:21:18.167 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:18.167 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:18.167 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2882247' 00:21:18.167 killing process with pid 2882247 00:21:18.167 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 2882247 00:21:18.167 Received shutdown signal, test time was about 2.000000 seconds 00:21:18.167 00:21:18.167 Latency(us) 00:21:18.167 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.167 =================================================================================================================== 00:21:18.167 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:18.167 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 2882247 00:21:18.424 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2880477 00:21:18.424 11:01:34 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@946 -- # '[' -z 2880477 ']' 00:21:18.424 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # kill -0 2880477 00:21:18.424 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # uname 00:21:18.424 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:18.424 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2880477 00:21:18.682 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:21:18.682 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:21:18.682 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2880477' 00:21:18.682 killing process with pid 2880477 00:21:18.682 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@965 -- # kill 2880477 00:21:18.682 [2024-05-15 11:01:34.665119] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:18.682 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # wait 2880477 00:21:18.940 00:21:18.940 real 0m18.517s 00:21:18.940 user 0m37.486s 00:21:18.940 sys 0m3.827s 00:21:18.940 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1122 -- # xtrace_disable 00:21:18.940 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:18.940 ************************************ 00:21:18.940 END TEST nvmf_digest_clean 00:21:18.940 ************************************ 00:21:18.940 11:01:34 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:21:18.940 11:01:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:21:18.940 11:01:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1103 -- # xtrace_disable 00:21:18.940 11:01:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:18.940 ************************************ 00:21:18.940 START TEST nvmf_digest_error 00:21:18.940 ************************************ 00:21:18.940 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1121 -- # run_digest_error 00:21:18.940 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:21:18.940 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:18.940 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:18.940 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:18.940 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2882801 00:21:18.940 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:18.940 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2882801 00:21:18.940 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 
2882801 ']' 00:21:18.940 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:18.940 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:18.940 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:18.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:18.940 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:18.940 11:01:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:18.940 [2024-05-15 11:01:35.037231] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:21:18.940 [2024-05-15 11:01:35.037322] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:18.940 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.940 [2024-05-15 11:01:35.124918] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.276 [2024-05-15 11:01:35.247576] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.276 [2024-05-15 11:01:35.247641] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.276 [2024-05-15 11:01:35.247657] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.276 [2024-05-15 11:01:35.247671] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.276 [2024-05-15 11:01:35.247683] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:19.276 [2024-05-15 11:01:35.247723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:19.276 [2024-05-15 11:01:35.308314] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:19.276 null0 00:21:19.276 [2024-05-15 11:01:35.431556] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.276 [2024-05-15 11:01:35.455538] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:19.276 [2024-05-15 11:01:35.455813] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2882826 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2882826 /var/tmp/bperf.sock 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 2882826 ']' 00:21:19.276 
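Annotation: nvmf_digest_error reuses the whole stack above but re-routes the target's crc32c work through the accel 'error' module (the accel_rpc.c notice in the trace). Each test case first disarms injection, attaches with --ddgst, then arms corruption so the initiator's nvme_tcp layer reports data digest errors, as seen further below. Condensed from the trace, with the -i 256 argument passed exactly as the test passes it:

  # target still in --wait-for-rpc state: route crc32c through the error module
  scripts/rpc.py -s /var/tmp/spdk.sock accel_assign_opc -o crc32c -m error
  # initiator: keep NVMe error counters and retry failed I/O
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # per test case: pass-through first, then arm corruption of crc32c operations
  scripts/rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t disable
  scripts/rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256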
11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:19.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:19.276 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:19.276 [2024-05-15 11:01:35.504251] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:21:19.276 [2024-05-15 11:01:35.504326] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2882826 ] 00:21:19.532 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.532 [2024-05-15 11:01:35.583569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.532 [2024-05-15 11:01:35.703730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.790 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:21:19.790 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0 00:21:19.790 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:19.790 11:01:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:20.046 11:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:20.046 11:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.046 11:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:20.046 11:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.046 11:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:20.046 11:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:20.313 nvme0n1 00:21:20.313 11:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:20.313 11:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.313 11:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:20.313 11:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.314 11:01:36 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:20.314 11:01:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:20.314 Running I/O for 2 seconds... 00:21:20.314 [2024-05-15 11:01:36.534549] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.314 [2024-05-15 11:01:36.534613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.314 [2024-05-15 11:01:36.534638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.581 [2024-05-15 11:01:36.550491] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.581 [2024-05-15 11:01:36.550529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24678 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.581 [2024-05-15 11:01:36.550550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.581 [2024-05-15 11:01:36.563569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.581 [2024-05-15 11:01:36.563605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.581 [2024-05-15 11:01:36.563625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.581 [2024-05-15 11:01:36.577131] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.581 [2024-05-15 11:01:36.577166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.581 [2024-05-15 11:01:36.577189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.581 [2024-05-15 11:01:36.591695] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.581 [2024-05-15 11:01:36.591729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.581 [2024-05-15 11:01:36.591764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.581 [2024-05-15 11:01:36.605423] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.581 [2024-05-15 11:01:36.605457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.581 [2024-05-15 11:01:36.605477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.581 [2024-05-15 11:01:36.618827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x865950) 00:21:20.581 [2024-05-15 11:01:36.618862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.581 [2024-05-15 11:01:36.618881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.581 [2024-05-15 11:01:36.632661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.581 [2024-05-15 11:01:36.632696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.581 [2024-05-15 11:01:36.632715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.581 [2024-05-15 11:01:36.646256] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.581 [2024-05-15 11:01:36.646290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.581 [2024-05-15 11:01:36.646315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.581 [2024-05-15 11:01:36.659337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.581 [2024-05-15 11:01:36.659371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.581 [2024-05-15 11:01:36.659391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.581 [2024-05-15 11:01:36.674371] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.581 [2024-05-15 11:01:36.674405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.582 [2024-05-15 11:01:36.674425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.582 [2024-05-15 11:01:36.685435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.582 [2024-05-15 11:01:36.685468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.582 [2024-05-15 11:01:36.685487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.582 [2024-05-15 11:01:36.700672] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.582 [2024-05-15 11:01:36.700706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.582 [2024-05-15 11:01:36.700725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.582 [2024-05-15 11:01:36.714828] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.582 [2024-05-15 11:01:36.714868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.582 [2024-05-15 11:01:36.714888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.582 [2024-05-15 11:01:36.728427] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.582 [2024-05-15 11:01:36.728461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.582 [2024-05-15 11:01:36.728480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.582 [2024-05-15 11:01:36.742050] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.582 [2024-05-15 11:01:36.742084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.582 [2024-05-15 11:01:36.742103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.582 [2024-05-15 11:01:36.755515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.582 [2024-05-15 11:01:36.755550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.582 [2024-05-15 11:01:36.755570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.582 [2024-05-15 11:01:36.767109] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.582 [2024-05-15 11:01:36.767143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.582 [2024-05-15 11:01:36.767162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.582 [2024-05-15 11:01:36.783224] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.582 [2024-05-15 11:01:36.783258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.582 [2024-05-15 11:01:36.783277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.582 [2024-05-15 11:01:36.795776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.582 [2024-05-15 11:01:36.795810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.582 [2024-05-15 11:01:36.795830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:20.582 [2024-05-15 11:01:36.809441] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.582 [2024-05-15 11:01:36.809474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.582 [2024-05-15 11:01:36.809494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.841 [2024-05-15 11:01:36.824371] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.841 [2024-05-15 11:01:36.824408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.841 [2024-05-15 11:01:36.824429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.841 [2024-05-15 11:01:36.836484] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.841 [2024-05-15 11:01:36.836519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.841 [2024-05-15 11:01:36.836538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.841 [2024-05-15 11:01:36.851062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.841 [2024-05-15 11:01:36.851096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.841 [2024-05-15 11:01:36.851115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.841 [2024-05-15 11:01:36.864219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.841 [2024-05-15 11:01:36.864252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.841 [2024-05-15 11:01:36.864272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.841 [2024-05-15 11:01:36.879109] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.841 [2024-05-15 11:01:36.879142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:24522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.841 [2024-05-15 11:01:36.879162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.841 [2024-05-15 11:01:36.890622] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.841 [2024-05-15 11:01:36.890655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.841 [2024-05-15 11:01:36.890675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.841 [2024-05-15 11:01:36.906332] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.841 [2024-05-15 11:01:36.906365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.841 [2024-05-15 11:01:36.906385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.841 [2024-05-15 11:01:36.918552] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.841 [2024-05-15 11:01:36.918587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.841 [2024-05-15 11:01:36.918607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.841 [2024-05-15 11:01:36.932502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.841 [2024-05-15 11:01:36.932535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.841 [2024-05-15 11:01:36.932554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.841 [2024-05-15 11:01:36.947596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.841 [2024-05-15 11:01:36.947630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:23870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.841 [2024-05-15 11:01:36.947656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.841 [2024-05-15 11:01:36.960619] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.841 [2024-05-15 11:01:36.960653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.841 [2024-05-15 11:01:36.960673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.841 [2024-05-15 11:01:36.974643] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.841 [2024-05-15 11:01:36.974677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.841 [2024-05-15 11:01:36.974696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.841 [2024-05-15 11:01:36.986676] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.841 [2024-05-15 11:01:36.986710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.841 [2024-05-15 11:01:36.986730] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.841 [2024-05-15 11:01:37.000586] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.841 [2024-05-15 11:01:37.000619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.841 [2024-05-15 11:01:37.000639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.841 [2024-05-15 11:01:37.015061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.841 [2024-05-15 11:01:37.015095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.841 [2024-05-15 11:01:37.015115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.841 [2024-05-15 11:01:37.028828] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.841 [2024-05-15 11:01:37.028862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:13193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.841 [2024-05-15 11:01:37.028882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.841 [2024-05-15 11:01:37.041753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.841 [2024-05-15 11:01:37.041787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.841 [2024-05-15 11:01:37.041806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.841 [2024-05-15 11:01:37.056185] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.841 [2024-05-15 11:01:37.056220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.841 [2024-05-15 11:01:37.056240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:20.841 [2024-05-15 11:01:37.068609] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:20.841 [2024-05-15 11:01:37.068650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:20.841 [2024-05-15 11:01:37.068671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.100 [2024-05-15 11:01:37.083655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.100 [2024-05-15 11:01:37.083691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:2605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.100 [2024-05-15 11:01:37.083712] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.100 [2024-05-15 11:01:37.096360] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.100 [2024-05-15 11:01:37.096395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8082 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.100 [2024-05-15 11:01:37.096415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.100 [2024-05-15 11:01:37.110259] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.100 [2024-05-15 11:01:37.110295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.100 [2024-05-15 11:01:37.110316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.100 [2024-05-15 11:01:37.124342] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.100 [2024-05-15 11:01:37.124377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:9478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.100 [2024-05-15 11:01:37.124398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.100 [2024-05-15 11:01:37.137883] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.100 [2024-05-15 11:01:37.137918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.100 [2024-05-15 11:01:37.137947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.100 [2024-05-15 11:01:37.150365] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.100 [2024-05-15 11:01:37.150399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:1319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.100 [2024-05-15 11:01:37.150419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.100 [2024-05-15 11:01:37.165047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.100 [2024-05-15 11:01:37.165081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:4004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.100 [2024-05-15 11:01:37.165101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.100 [2024-05-15 11:01:37.178753] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.100 [2024-05-15 11:01:37.178787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:21.100 [2024-05-15 11:01:37.178812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.100 [2024-05-15 11:01:37.193007] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.100 [2024-05-15 11:01:37.193041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.100 [2024-05-15 11:01:37.193060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.100 [2024-05-15 11:01:37.205175] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.100 [2024-05-15 11:01:37.205208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.100 [2024-05-15 11:01:37.205228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.100 [2024-05-15 11:01:37.219758] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.100 [2024-05-15 11:01:37.219791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.100 [2024-05-15 11:01:37.219811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.100 [2024-05-15 11:01:37.232015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.100 [2024-05-15 11:01:37.232048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.100 [2024-05-15 11:01:37.232067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.100 [2024-05-15 11:01:37.246909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.100 [2024-05-15 11:01:37.246949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.100 [2024-05-15 11:01:37.246970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.100 [2024-05-15 11:01:37.260864] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.100 [2024-05-15 11:01:37.260897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.100 [2024-05-15 11:01:37.260916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.100 [2024-05-15 11:01:37.274433] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.100 [2024-05-15 11:01:37.274467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17585 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.100 [2024-05-15 11:01:37.274486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.100 [2024-05-15 11:01:37.287158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.100 [2024-05-15 11:01:37.287191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:12850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.100 [2024-05-15 11:01:37.287210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.100 [2024-05-15 11:01:37.301670] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.100 [2024-05-15 11:01:37.301709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.100 [2024-05-15 11:01:37.301729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.100 [2024-05-15 11:01:37.313820] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.100 [2024-05-15 11:01:37.313854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.100 [2024-05-15 11:01:37.313873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.100 [2024-05-15 11:01:37.330334] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.100 [2024-05-15 11:01:37.330379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.100 [2024-05-15 11:01:37.330410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.359 [2024-05-15 11:01:37.342656] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.359 [2024-05-15 11:01:37.342691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:7468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.359 [2024-05-15 11:01:37.342711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.359 [2024-05-15 11:01:37.356752] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.359 [2024-05-15 11:01:37.356786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.359 [2024-05-15 11:01:37.356805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.359 [2024-05-15 11:01:37.369508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.359 [2024-05-15 11:01:37.369542] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.359 [2024-05-15 11:01:37.369562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.359 [2024-05-15 11:01:37.384090] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.359 [2024-05-15 11:01:37.384123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.359 [2024-05-15 11:01:37.384143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.359 [2024-05-15 11:01:37.398422] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.359 [2024-05-15 11:01:37.398456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.359 [2024-05-15 11:01:37.398475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.359 [2024-05-15 11:01:37.412143] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.359 [2024-05-15 11:01:37.412188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.359 [2024-05-15 11:01:37.412208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.359 [2024-05-15 11:01:37.425727] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.359 [2024-05-15 11:01:37.425760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.359 [2024-05-15 11:01:37.425780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.359 [2024-05-15 11:01:37.438295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.359 [2024-05-15 11:01:37.438329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.359 [2024-05-15 11:01:37.438349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.359 [2024-05-15 11:01:37.452424] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.359 [2024-05-15 11:01:37.452458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.359 [2024-05-15 11:01:37.452478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.359 [2024-05-15 11:01:37.466274] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.359 [2024-05-15 11:01:37.466308] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:13516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.359 [2024-05-15 11:01:37.466327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.359 [2024-05-15 11:01:37.479435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.359 [2024-05-15 11:01:37.479469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:25048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.359 [2024-05-15 11:01:37.479488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.359 [2024-05-15 11:01:37.493592] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.359 [2024-05-15 11:01:37.493626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.359 [2024-05-15 11:01:37.493646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.359 [2024-05-15 11:01:37.506923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.359 [2024-05-15 11:01:37.506964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.359 [2024-05-15 11:01:37.506990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.359 [2024-05-15 11:01:37.521262] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.359 [2024-05-15 11:01:37.521295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.359 [2024-05-15 11:01:37.521314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.359 [2024-05-15 11:01:37.533947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.359 [2024-05-15 11:01:37.533981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:8567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.359 [2024-05-15 11:01:37.534006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.359 [2024-05-15 11:01:37.547372] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.359 [2024-05-15 11:01:37.547406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.359 [2024-05-15 11:01:37.547425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.360 [2024-05-15 11:01:37.561387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 
00:21:21.360 [2024-05-15 11:01:37.561419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.360 [2024-05-15 11:01:37.561439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.360 [2024-05-15 11:01:37.575351] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.360 [2024-05-15 11:01:37.575385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.360 [2024-05-15 11:01:37.575404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.360 [2024-05-15 11:01:37.588827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.360 [2024-05-15 11:01:37.588862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.360 [2024-05-15 11:01:37.588883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.618 [2024-05-15 11:01:37.602848] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.618 [2024-05-15 11:01:37.602884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:24392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.618 [2024-05-15 11:01:37.602904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.618 [2024-05-15 11:01:37.614906] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.618 [2024-05-15 11:01:37.614949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.618 [2024-05-15 11:01:37.614970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.618 [2024-05-15 11:01:37.630194] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.618 [2024-05-15 11:01:37.630228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:11058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.618 [2024-05-15 11:01:37.630248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.618 [2024-05-15 11:01:37.644178] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.618 [2024-05-15 11:01:37.644212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:6333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.618 [2024-05-15 11:01:37.644232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.618 [2024-05-15 11:01:37.657311] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x865950) 00:21:21.618 [2024-05-15 11:01:37.657350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.618 [2024-05-15 11:01:37.657371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.618 [2024-05-15 11:01:37.671546] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.618 [2024-05-15 11:01:37.671580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.618 [2024-05-15 11:01:37.671599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.618 [2024-05-15 11:01:37.684117] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.618 [2024-05-15 11:01:37.684151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.618 [2024-05-15 11:01:37.684170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.618 [2024-05-15 11:01:37.699663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.618 [2024-05-15 11:01:37.699697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.619 [2024-05-15 11:01:37.699717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.619 [2024-05-15 11:01:37.712163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.619 [2024-05-15 11:01:37.712198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.619 [2024-05-15 11:01:37.712217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.619 [2024-05-15 11:01:37.727078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.619 [2024-05-15 11:01:37.727112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.619 [2024-05-15 11:01:37.727132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.619 [2024-05-15 11:01:37.741083] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.619 [2024-05-15 11:01:37.741118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.619 [2024-05-15 11:01:37.741137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.619 [2024-05-15 11:01:37.753870] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.619 [2024-05-15 11:01:37.753905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.619 [2024-05-15 11:01:37.753924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.619 [2024-05-15 11:01:37.768258] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.619 [2024-05-15 11:01:37.768292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.619 [2024-05-15 11:01:37.768312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.619 [2024-05-15 11:01:37.780923] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.619 [2024-05-15 11:01:37.780977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.619 [2024-05-15 11:01:37.780995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.619 [2024-05-15 11:01:37.792540] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.619 [2024-05-15 11:01:37.792571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.619 [2024-05-15 11:01:37.792589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.619 [2024-05-15 11:01:37.805084] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.619 [2024-05-15 11:01:37.805113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.619 [2024-05-15 11:01:37.805129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.619 [2024-05-15 11:01:37.818446] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.619 [2024-05-15 11:01:37.818477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.619 [2024-05-15 11:01:37.818495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.619 [2024-05-15 11:01:37.831734] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.619 [2024-05-15 11:01:37.831764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:24392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.619 [2024-05-15 11:01:37.831782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
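Each block in the stream above is one failed read: host-side nvme_tcp recomputes CRC32C over the received payload, it disagrees with the data digest the target sent (the target's crc32c output is being deliberately corrupted), and the command completes with the generic TRANSIENT TRANSPORT ERROR status (00/22). Because the controller was attached with --bdev-retry-count -1, bdevperf retries every failure, so the pattern repeats for the full 2-second run. For reference, the bperf-side sequence that produced it, reassembled from the trace (the commands are verbatim from the log; only the $SPDK shell variable is added here):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Start bdevperf idle (-z) on core 1 with its own RPC socket; the randread /
# 4 KiB / qd 128 / 2 s workload is armed but not started yet.
$SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &
# Retry failed I/O indefinitely and keep per-controller NVMe error statistics.
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Attach with the data digest enabled (--ddgst); this creates nvme0n1.
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# On the target (default socket), start corrupting crc32c results (-t corrupt -i 256, as above).
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
# Kick off the timed run through bdevperf's companion script.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

When the timer expires, the harness re-runs the same accel_error_inject_error call with -t disable (as it did once before attaching the controller) to stop the corruption.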
00:21:21.619 [2024-05-15 11:01:37.843482] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.619 [2024-05-15 11:01:37.843525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.619 [2024-05-15 11:01:37.843541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.878 [2024-05-15 11:01:37.857068] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.878 [2024-05-15 11:01:37.857101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.878 [2024-05-15 11:01:37.857119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.878 [2024-05-15 11:01:37.870102] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.878 [2024-05-15 11:01:37.870133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.878 [2024-05-15 11:01:37.870151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.878 [2024-05-15 11:01:37.882807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.878 [2024-05-15 11:01:37.882836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.878 [2024-05-15 11:01:37.882875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.878 [2024-05-15 11:01:37.894508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.878 [2024-05-15 11:01:37.894538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.878 [2024-05-15 11:01:37.894556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.878 [2024-05-15 11:01:37.907252] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.878 [2024-05-15 11:01:37.907284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.878 [2024-05-15 11:01:37.907301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.878 [2024-05-15 11:01:37.919801] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.878 [2024-05-15 11:01:37.919848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:4586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.878 [2024-05-15 11:01:37.919865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.878 [2024-05-15 11:01:37.933053] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.878 [2024-05-15 11:01:37.933095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.878 [2024-05-15 11:01:37.933114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.878 [2024-05-15 11:01:37.944730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.878 [2024-05-15 11:01:37.944761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.878 [2024-05-15 11:01:37.944779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.878 [2024-05-15 11:01:37.957147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.878 [2024-05-15 11:01:37.957176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.878 [2024-05-15 11:01:37.957193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.878 [2024-05-15 11:01:37.969986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.878 [2024-05-15 11:01:37.970017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.878 [2024-05-15 11:01:37.970035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.878 [2024-05-15 11:01:37.983190] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.878 [2024-05-15 11:01:37.983219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.878 [2024-05-15 11:01:37.983236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.878 [2024-05-15 11:01:37.995950] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.878 [2024-05-15 11:01:37.995980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.878 [2024-05-15 11:01:37.995998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.878 [2024-05-15 11:01:38.007095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.878 [2024-05-15 11:01:38.007125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.878 [2024-05-15 11:01:38.007143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.878 [2024-05-15 11:01:38.020076] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.878 [2024-05-15 11:01:38.020106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.878 [2024-05-15 11:01:38.020124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.878 [2024-05-15 11:01:38.034023] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.878 [2024-05-15 11:01:38.034052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.878 [2024-05-15 11:01:38.034070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.878 [2024-05-15 11:01:38.046508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.878 [2024-05-15 11:01:38.046536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.878 [2024-05-15 11:01:38.046553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.878 [2024-05-15 11:01:38.059729] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.878 [2024-05-15 11:01:38.059760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.878 [2024-05-15 11:01:38.059777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.878 [2024-05-15 11:01:38.070442] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.878 [2024-05-15 11:01:38.070472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.878 [2024-05-15 11:01:38.070490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.878 [2024-05-15 11:01:38.083868] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.878 [2024-05-15 11:01:38.083896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.878 [2024-05-15 11:01:38.083912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.878 [2024-05-15 11:01:38.097180] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.878 [2024-05-15 11:01:38.097211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:8747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.878 [2024-05-15 11:01:38.097235] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:21.878 [2024-05-15 11:01:38.109349] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:21.878 [2024-05-15 11:01:38.109382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:21.878 [2024-05-15 11:01:38.109401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.137 [2024-05-15 11:01:38.122909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.137 [2024-05-15 11:01:38.122948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.137 [2024-05-15 11:01:38.122968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.137 [2024-05-15 11:01:38.136021] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.137 [2024-05-15 11:01:38.136052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.137 [2024-05-15 11:01:38.136070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.137 [2024-05-15 11:01:38.147704] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.137 [2024-05-15 11:01:38.147735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.137 [2024-05-15 11:01:38.147753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.137 [2024-05-15 11:01:38.161087] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.137 [2024-05-15 11:01:38.161119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.137 [2024-05-15 11:01:38.161137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.137 [2024-05-15 11:01:38.173841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.137 [2024-05-15 11:01:38.173873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.137 [2024-05-15 11:01:38.173890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.137 [2024-05-15 11:01:38.185699] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.137 [2024-05-15 11:01:38.185731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:3760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
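
Every completion in this stretch carries the same status pair: spdk_nvme_print_completion prints it as (sct/sc) in hex, so (00/22) is Status Code Type 0x0 (generic command status) with Status Code 0x22, which the NVMe base specification names Transient Transport Error; dnr:0 marks each command as retryable. A small sketch of tallying and decoding those pairs from a saved copy of this log (build.log is a placeholder filename, not something this job writes):

#!/usr/bin/env bash
# Tally the (sct/sc) pairs printed by spdk_nvme_print_completion.
grep -o 'TRANSPORT ERROR ([0-9a-f]*/[0-9a-f]*)' build.log | sort | uniq -c

# Decode one pair, e.g. "00/22".
decode_status() {
  local sct=$((16#${1%/*})) sc=$((16#${1#*/}))
  # sct 0x0 is the generic command status set; sc 0x22 there is
  # Transient Transport Error, the code the bdev layer counts as
  # command_transient_transport_error.
  printf 'sct=0x%x sc=0x%x\n' "$sct" "$sc"
}
decode_status 00/22
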
00:21:22.137 [2024-05-15 11:01:38.185748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.137 [2024-05-15 11:01:38.198869] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.137 [2024-05-15 11:01:38.198901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.137 [2024-05-15 11:01:38.198918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.137 [2024-05-15 11:01:38.212095] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.137 [2024-05-15 11:01:38.212132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.137 [2024-05-15 11:01:38.212150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.137 [2024-05-15 11:01:38.224416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.137 [2024-05-15 11:01:38.224446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.137 [2024-05-15 11:01:38.224463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.137 [2024-05-15 11:01:38.236862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.137 [2024-05-15 11:01:38.236893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.137 [2024-05-15 11:01:38.236911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.137 [2024-05-15 11:01:38.249440] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.137 [2024-05-15 11:01:38.249472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.137 [2024-05-15 11:01:38.249490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.137 [2024-05-15 11:01:38.262394] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.137 [2024-05-15 11:01:38.262425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.137 [2024-05-15 11:01:38.262442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.137 [2024-05-15 11:01:38.274863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.137 [2024-05-15 11:01:38.274894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24077 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.137 [2024-05-15 11:01:38.274911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.137 [2024-05-15 11:01:38.287996] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.137 [2024-05-15 11:01:38.288026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.137 [2024-05-15 11:01:38.288044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.137 [2024-05-15 11:01:38.299884] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.137 [2024-05-15 11:01:38.299928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.137 [2024-05-15 11:01:38.299958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.137 [2024-05-15 11:01:38.312599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.137 [2024-05-15 11:01:38.312630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.137 [2024-05-15 11:01:38.312648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.137 [2024-05-15 11:01:38.325987] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.137 [2024-05-15 11:01:38.326019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:16028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.137 [2024-05-15 11:01:38.326037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.137 [2024-05-15 11:01:38.338515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.137 [2024-05-15 11:01:38.338546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.138 [2024-05-15 11:01:38.338563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.138 [2024-05-15 11:01:38.350578] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.138 [2024-05-15 11:01:38.350609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:21722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.138 [2024-05-15 11:01:38.350626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.138 [2024-05-15 11:01:38.364458] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.138 [2024-05-15 11:01:38.364490] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:277 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.138 [2024-05-15 11:01:38.364507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.396 [2024-05-15 11:01:38.375621] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.396 [2024-05-15 11:01:38.375669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:18682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.396 [2024-05-15 11:01:38.375687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.396 [2024-05-15 11:01:38.388570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.396 [2024-05-15 11:01:38.388603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10539 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.396 [2024-05-15 11:01:38.388620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.396 [2024-05-15 11:01:38.402894] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.396 [2024-05-15 11:01:38.402948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.396 [2024-05-15 11:01:38.402979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.396 [2024-05-15 11:01:38.415885] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.396 [2024-05-15 11:01:38.415934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.396 [2024-05-15 11:01:38.415955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.396 [2024-05-15 11:01:38.426803] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.396 [2024-05-15 11:01:38.426846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.396 [2024-05-15 11:01:38.426869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.396 [2024-05-15 11:01:38.439636] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.396 [2024-05-15 11:01:38.439667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:22.396 [2024-05-15 11:01:38.439685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:22.396 [2024-05-15 11:01:38.452473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950) 00:21:22.396 
[2024-05-15 11:01:38.452504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:22.396 [2024-05-15 11:01:38.452521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:22.396 [2024-05-15 11:01:38.465533] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950)
00:21:22.396 [2024-05-15 11:01:38.465563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:7214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:22.396 [2024-05-15 11:01:38.465581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:22.396 [2024-05-15 11:01:38.478201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950)
00:21:22.396 [2024-05-15 11:01:38.478231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:22.396 [2024-05-15 11:01:38.478249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:22.396 [2024-05-15 11:01:38.489755] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950)
00:21:22.396 [2024-05-15 11:01:38.489786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:22.396 [2024-05-15 11:01:38.489804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:22.396 [2024-05-15 11:01:38.502508] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950)
00:21:22.396 [2024-05-15 11:01:38.502538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:14020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:22.396 [2024-05-15 11:01:38.502556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:22.396 [2024-05-15 11:01:38.515290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x865950)
00:21:22.396 [2024-05-15 11:01:38.515321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:22.396 [2024-05-15 11:01:38.515339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:22.396
00:21:22.396                                                Latency(us)
00:21:22.396 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:22.396 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:21:22.396 nvme0n1            :       2.00   19164.74      74.86       0.00     0.00    6668.71    3398.16   17087.91
00:21:22.396 ===================================================================================================================
00:21:22.396 Total              :            19164.74      74.86       0.00     0.00    6668.71    3398.16   17087.91
00:21:22.396 0
00:21:22.396 11:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
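
The iostat line above checks out arithmetically: 19164.74 IOPS of 4096-byte reads is 19164.74 * 4096 / 1048576 = 74.86 MiB/s, and Fail/s stays at 0.00 because each corrupted read completes as a retryable transient error rather than a failure. get_transient_errcount then pulls the error counter over the bperf RPC socket, as traced below. A standalone sketch of the same query, using the rpc.py path, socket and bdev name from this job (jq assumed installed):

#!/usr/bin/env bash
# Read the transient-transport-error counter that --nvme-error-stat maintains
# for nvme0n1 inside the running bdevperf instance.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock
errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# digest.sh@71 only asserts the counter is non-zero; in this run it was 150.
(( errcount > 0 )) && echo "transient transport errors: $errcount"
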
00:21:22.396 11:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:21:22.396 11:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:22.396 | .driver_specific
00:21:22.396 | .nvme_error
00:21:22.396 | .status_code
00:21:22.396 | .command_transient_transport_error'
00:21:22.396 11:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:21:22.654 11:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 150 > 0 ))
00:21:22.654 11:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2882826
00:21:22.654 11:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 2882826 ']'
00:21:22.654 11:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 2882826
00:21:22.654 11:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
00:21:22.654 11:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:21:22.654 11:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2882826
00:21:22.654 11:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:21:22.654 11:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:21:22.654 11:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2882826'
killing process with pid 2882826
11:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 2882826
Received shutdown signal, test time was about 2.000000 seconds
00:21:22.654
00:21:22.654                                                Latency(us)
00:21:22.654 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:21:22.654 ===================================================================================================================
00:21:22.654 Total              :               0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:21:22.654 11:01:38 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 2882826
00:21:22.912 11:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:21:22.912 11:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:21:22.912 11:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:21:22.912 11:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:21:22.912 11:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:21:22.912 11:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2883238
00:21:22.912 11:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:21:22.912 11:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2883238 /var/tmp/bperf.sock
00:21:22.912 11:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 2883238 ']'
00:21:22.912 11:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
00:21:22.912 11:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
00:21:22.912 11:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:21:22.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:21:22.912 11:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
00:21:22.912 11:01:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:22.912 [2024-05-15 11:01:39.137321] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization...
00:21:22.912 [2024-05-15 11:01:39.137408] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883238 ]
00:21:22.912 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:22.912 Zero copy mechanism will not be used.
00:21:22.912 EAL: No free 2048 kB hugepages reported on node 1
00:21:23.171 [2024-05-15 11:01:39.209585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:23.171 [2024-05-15 11:01:39.323312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:24.103 11:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:21:24.103 11:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:21:24.103 11:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:24.103 11:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:24.361 11:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:21:24.361 11:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:24.361 11:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:24.361 11:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:24.361 11:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:24.361 11:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:24.928 nvme0n1
00:21:24.928 11:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:21:24.928 11:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:24.928 11:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:24.928 11:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.928 11:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:24.928 11:01:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:24.928 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:24.928 Zero copy mechanism will not be used. 00:21:24.928 Running I/O for 2 seconds... 00:21:24.928 [2024-05-15 11:01:41.000834] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:24.928 [2024-05-15 11:01:41.000894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.928 [2024-05-15 11:01:41.000919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.929 [2024-05-15 11:01:41.017862] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:24.929 [2024-05-15 11:01:41.017898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.929 [2024-05-15 11:01:41.017917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.929 [2024-05-15 11:01:41.034600] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:24.929 [2024-05-15 11:01:41.034635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.929 [2024-05-15 11:01:41.034655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.929 [2024-05-15 11:01:41.051563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:24.929 [2024-05-15 11:01:41.051596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.929 [2024-05-15 11:01:41.051614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.929 [2024-05-15 11:01:41.068344] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:24.929 [2024-05-15 11:01:41.068378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.929 [2024-05-15 11:01:41.068396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.929 [2024-05-15 11:01:41.085060] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:24.929 [2024-05-15 11:01:41.085089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.929 [2024-05-15 11:01:41.085104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:24.929 [2024-05-15 11:01:41.101770] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:24.929 [2024-05-15 11:01:41.101803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.929 [2024-05-15 11:01:41.101821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:24.929 [2024-05-15 11:01:41.118582] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:24.929 [2024-05-15 11:01:41.118616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.929 [2024-05-15 11:01:41.118635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:24.929 [2024-05-15 11:01:41.135242] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:24.929 [2024-05-15 11:01:41.135270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.929 [2024-05-15 11:01:41.135302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:24.929 [2024-05-15 11:01:41.151946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:24.929 [2024-05-15 11:01:41.151978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.929 [2024-05-15 11:01:41.152011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:25.187 [2024-05-15 11:01:41.168863] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.187 [2024-05-15 11:01:41.168900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.187 [2024-05-15 11:01:41.168925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:25.187 [2024-05-15 11:01:41.185629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.187 [2024-05-15 11:01:41.185664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.187 [2024-05-15 11:01:41.185683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.187 [2024-05-15 11:01:41.202577] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.187 [2024-05-15 11:01:41.202610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.187 [2024-05-15 11:01:41.202629] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:25.188 [2024-05-15 11:01:41.219582] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.188 [2024-05-15 11:01:41.219617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.188 [2024-05-15 11:01:41.219636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:25.188 [2024-05-15 11:01:41.236231] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.188 [2024-05-15 11:01:41.236263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.188 [2024-05-15 11:01:41.236282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:25.188 [2024-05-15 11:01:41.252904] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.188 [2024-05-15 11:01:41.252945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.188 [2024-05-15 11:01:41.252979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.188 [2024-05-15 11:01:41.269622] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.188 [2024-05-15 11:01:41.269654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.188 [2024-05-15 11:01:41.269673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:25.188 [2024-05-15 11:01:41.286318] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.188 [2024-05-15 11:01:41.286351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.188 [2024-05-15 11:01:41.286369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:25.188 [2024-05-15 11:01:41.303154] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.188 [2024-05-15 11:01:41.303182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.188 [2024-05-15 11:01:41.303198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:25.188 [2024-05-15 11:01:41.320192] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.188 [2024-05-15 11:01:41.320242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
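
These len:32 READs belong to the second pass: the block size is now 131072 bytes, i.e. 32 logical blocks of 4096 bytes per command, at queue depth 16. Condensed from the xtrace lines above, the setup for this pass is roughly the following RPC sequence; a sketch under the paths this job uses, not the verbatim harness:

#!/usr/bin/env bash
# Relaunch bdevperf for 128 KiB random reads and inject a CRC32C error into
# the accel layer at an interval of 32 operations, so the TCP data digest
# check fails and the affected READs complete as COMMAND TRANSIENT
# TRANSPORT ERROR (00/22).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
SOCK=/var/tmp/bperf.sock

"$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 131072 -t 2 -q 16 -z &
sleep 1  # stand-in for the harness's waitforlisten poll on $SOCK

# Per-status-code NVMe error counters on, and bdev-level retries unlimited,
# so injected errors are retried instead of failing the job (Fail/s = 0.00).
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# rpc_cmd in the harness goes to the app's default RPC socket rather than
# $SOCK; which application answers is not visible in this trace, so the
# socket choice below is an assumption.
"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# --ddgst enables the TCP data digest, so received payloads are CRC-checked.
"$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

"$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
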
00:21:25.188 [2024-05-15 11:01:41.320261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.188 [2024-05-15 11:01:41.336881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.188 [2024-05-15 11:01:41.336914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.188 [2024-05-15 11:01:41.336941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:25.188 [2024-05-15 11:01:41.353647] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.188 [2024-05-15 11:01:41.353680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.188 [2024-05-15 11:01:41.353698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:25.188 [2024-05-15 11:01:41.370373] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.188 [2024-05-15 11:01:41.370405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.188 [2024-05-15 11:01:41.370424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:25.188 [2024-05-15 11:01:41.386895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.188 [2024-05-15 11:01:41.386927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.188 [2024-05-15 11:01:41.386997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.188 [2024-05-15 11:01:41.402973] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.188 [2024-05-15 11:01:41.403020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.188 [2024-05-15 11:01:41.403036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:25.188 [2024-05-15 11:01:41.419143] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.188 [2024-05-15 11:01:41.419176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.188 [2024-05-15 11:01:41.419193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:25.447 [2024-05-15 11:01:41.435339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.447 [2024-05-15 11:01:41.435375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.447 [2024-05-15 11:01:41.435394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:25.447 [2024-05-15 11:01:41.451502] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.447 [2024-05-15 11:01:41.451536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.447 [2024-05-15 11:01:41.451554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.447 [2024-05-15 11:01:41.467569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.447 [2024-05-15 11:01:41.467602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.447 [2024-05-15 11:01:41.467621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:25.447 [2024-05-15 11:01:41.482895] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.447 [2024-05-15 11:01:41.482927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.447 [2024-05-15 11:01:41.482954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:25.447 [2024-05-15 11:01:41.498155] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.447 [2024-05-15 11:01:41.498183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.447 [2024-05-15 11:01:41.498199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:25.447 [2024-05-15 11:01:41.513462] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.447 [2024-05-15 11:01:41.513494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.447 [2024-05-15 11:01:41.513512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.447 [2024-05-15 11:01:41.528924] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.447 [2024-05-15 11:01:41.528965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.447 [2024-05-15 11:01:41.528983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:25.447 [2024-05-15 11:01:41.544920] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.447 [2024-05-15 11:01:41.544978] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.448 [2024-05-15 11:01:41.544994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:25.448 [2024-05-15 11:01:41.560898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.448 [2024-05-15 11:01:41.560939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.448 [2024-05-15 11:01:41.560960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:25.448 [2024-05-15 11:01:41.576881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.448 [2024-05-15 11:01:41.576914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.448 [2024-05-15 11:01:41.576941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.448 [2024-05-15 11:01:41.592382] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.448 [2024-05-15 11:01:41.592415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.448 [2024-05-15 11:01:41.592443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:25.448 [2024-05-15 11:01:41.607621] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.448 [2024-05-15 11:01:41.607664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.448 [2024-05-15 11:01:41.607678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:25.448 [2024-05-15 11:01:41.623123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.448 [2024-05-15 11:01:41.623168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.448 [2024-05-15 11:01:41.623184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:25.448 [2024-05-15 11:01:41.638856] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.448 [2024-05-15 11:01:41.638889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.448 [2024-05-15 11:01:41.638907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.448 [2024-05-15 11:01:41.654300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.448 [2024-05-15 
11:01:41.654334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.448 [2024-05-15 11:01:41.654353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:25.448 [2024-05-15 11:01:41.669887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.448 [2024-05-15 11:01:41.669921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.448 [2024-05-15 11:01:41.669948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:25.706 [2024-05-15 11:01:41.685451] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.706 [2024-05-15 11:01:41.685488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.706 [2024-05-15 11:01:41.685507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:25.706 [2024-05-15 11:01:41.701297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.706 [2024-05-15 11:01:41.701331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.706 [2024-05-15 11:01:41.701350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:25.706 [2024-05-15 11:01:41.717356] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.706 [2024-05-15 11:01:41.717388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.706 [2024-05-15 11:01:41.717407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:25.706 [2024-05-15 11:01:41.733607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.706 [2024-05-15 11:01:41.733645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.706 [2024-05-15 11:01:41.733665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:25.706 [2024-05-15 11:01:41.750026] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850) 00:21:25.706 [2024-05-15 11:01:41.750069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:25.706 [2024-05-15 11:01:41.750085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:25.706 [2024-05-15 11:01:41.766509] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x7dc850)
[2024-05-15 11:01:41.766541 .. 2024-05-15 11:01:42.985635] roughly 75 similar entries elided: while the test's injected crc32c corruption is active, every READ in the randread run fails its data digest check the same way. Each entry is the same triplet: nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x7dc850), then nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, then nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 p:0 m:0 dnr:0. Only the LBA, the sqhd field (cycling 0001/0021/0041/0061) and the timestamps vary between entries.
Latency(us)
Device Information : runtime(s)    IOPS     MiB/s   Fail/s   TO/s   Average      min       max
Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
nvme0n1            :       2.00  1904.88   238.11     0.00   0.00   8391.32   7524.50  17379.18
===============================================================================================
Total              :             1904.88   238.11     0.00   0.00   8391.32   7524.50  17379.18
0
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 123 > 0 ))
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2883238
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 2883238 ']'
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 2883238
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2883238
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2883238'
killing process with pid 2883238
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 2883238
Received shutdown signal, test time was about 2.000000 seconds

Latency(us)
Device Information : runtime(s)    IOPS     MiB/s   Fail/s   TO/s   Average      min       max
===============================================================================================
Total              :                0.00     0.00     0.00   0.00      0.00      0.00      0.00

11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 2883238
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
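The count returned above is 123, so every failed read was tallied. The trace shows how digest.sh turns the error log into a pass/fail signal: get_transient_errcount asks the bdevperf instance for per-bdev I/O statistics over its RPC socket and extracts the NVMe transient-transport-error counter with jq. A minimal standalone sketch of that query; the socket, RPC and jq filter are taken verbatim from the trace, while the count variable name is illustrative:

    # Fetch per-bdev iostat JSON from the bdevperf instance and pull out the
    # number of commands that completed with COMMAND TRANSIENT TRANSPORT ERROR.
    count=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
                -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error')
    # The digest-error case passes only if at least one such error was counted.
    (( count > 0 ))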
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2883774
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2883774 /var/tmp/bperf.sock
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 2883774 ']'
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
[2024-05-15 11:01:43.578834] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization...
[2024-05-15 11:01:43.578936] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2883774 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-05-15 11:01:43.652107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-05-15 11:01:43.761066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
11:01:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
11:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
11:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
11:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
11:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
11:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
11:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
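Condensed, the setup just traced is: start a fresh bdevperf for the randwrite case, enable per-status-code NVMe error accounting with unlimited bdev retries, make sure no crc32c error injection is left armed, and attach the target with the data digest (--ddgst) enabled so payload CRCs are generated and verified. A sketch of the same sequence, assuming the framework helpers bperf_rpc and rpc_cmd both wrap rpc.py (which socket rpc_cmd targets is decided by the test framework, not shown here); the rpc shell variable is illustrative:

    # Start bdevperf on core mask 0x2: randwrite, 4 KiB I/O, 2 s, queue depth 128.
    # -z makes it idle until perform_tests is issued over the RPC socket.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Keep NVMe error statistics per status code and retry failed I/O forever,
    # so digest errors show up as counters instead of aborting the run.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # No corruption while the controller attaches.
    rpc_cmd accel_error_inject_error -o crc32c -t disable

    # Attach over TCP with data digest enabled.
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0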
nvme0n1
11:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
11:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
11:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
11:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
11:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
11:01:44 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
Running I/O for 2 seconds...
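The attach RPC returns the new bdev name (nvme0n1, printed above); the test then re-arms the corruption and kicks off the workload, so every WRITE that follows is expected to fail its data digest. The two commands as a sketch, under the same assumptions as the previous one:

    # Corrupt the next 256 crc32c operations in the accel layer, so computed
    # data digests stop matching the payload on the wire.
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256

    # Wake the idling bdevperf (-z) and run its configured 2-second workload.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests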
[2024-05-15 11:01:44.759872 .. 2024-05-15 11:01:45.619765] roughly 50 similar entries elided: with the crc32c corruption re-armed and the data digest enabled, every WRITE in the randwrite run fails the same way. Each entry is the same triplet: tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640, then nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 nsid:1 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000, then nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0. Only the cid (cycling 6, 7, 121-125), the LBA and the timestamps vary between entries. The last entry in this excerpt, reproduced verbatim:
[2024-05-15 11:01:45.635839] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640
[2024-05-15 11:01:45.636183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-05-15 11:01:45.636215] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.638 [2024-05-15 11:01:45.652246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.638 [2024-05-15 11:01:45.652591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.638 [2024-05-15 11:01:45.652623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.638 [2024-05-15 11:01:45.668615] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.638 [2024-05-15 11:01:45.668975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.638 [2024-05-15 11:01:45.669002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.639 [2024-05-15 11:01:45.685178] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.639 [2024-05-15 11:01:45.685523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.639 [2024-05-15 11:01:45.685554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.639 [2024-05-15 11:01:45.701669] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.639 [2024-05-15 11:01:45.702017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.639 [2024-05-15 11:01:45.702044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.639 [2024-05-15 11:01:45.718180] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.639 [2024-05-15 11:01:45.718520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.639 [2024-05-15 11:01:45.718551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.639 [2024-05-15 11:01:45.734677] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.639 [2024-05-15 11:01:45.735025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.639 [2024-05-15 11:01:45.735052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.639 [2024-05-15 11:01:45.751227] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.639 [2024-05-15 11:01:45.751582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:22873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.639 [2024-05-15 11:01:45.751616] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.639 [2024-05-15 11:01:45.767683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.639 [2024-05-15 11:01:45.768045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:14887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.639 [2024-05-15 11:01:45.768075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.639 [2024-05-15 11:01:45.784183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.639 [2024-05-15 11:01:45.784534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.639 [2024-05-15 11:01:45.784566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.639 [2024-05-15 11:01:45.800751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.639 [2024-05-15 11:01:45.801102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.639 [2024-05-15 11:01:45.801130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.639 [2024-05-15 11:01:45.817320] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.639 [2024-05-15 11:01:45.817657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.639 [2024-05-15 11:01:45.817688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.639 [2024-05-15 11:01:45.833658] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.639 [2024-05-15 11:01:45.834006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.639 [2024-05-15 11:01:45.834034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.639 [2024-05-15 11:01:45.850125] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.639 [2024-05-15 11:01:45.850496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.639 [2024-05-15 11:01:45.850528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.639 [2024-05-15 11:01:45.866594] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.639 [2024-05-15 11:01:45.866941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.639 [2024-05-15 
11:01:45.866993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.897 [2024-05-15 11:01:45.882940] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.897 [2024-05-15 11:01:45.883281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.897 [2024-05-15 11:01:45.883327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.897 [2024-05-15 11:01:45.899468] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.897 [2024-05-15 11:01:45.899805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.897 [2024-05-15 11:01:45.899836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.897 [2024-05-15 11:01:45.915788] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.897 [2024-05-15 11:01:45.916154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.897 [2024-05-15 11:01:45.916183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.897 [2024-05-15 11:01:45.932220] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.897 [2024-05-15 11:01:45.932567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.897 [2024-05-15 11:01:45.932598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.897 [2024-05-15 11:01:45.948576] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.897 [2024-05-15 11:01:45.948916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:24636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.897 [2024-05-15 11:01:45.948970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.897 [2024-05-15 11:01:45.965005] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.897 [2024-05-15 11:01:45.965328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.897 [2024-05-15 11:01:45.965359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.897 [2024-05-15 11:01:45.981411] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.897 [2024-05-15 11:01:45.981749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2398 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:21:29.897 [2024-05-15 11:01:45.981780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.897 [2024-05-15 11:01:45.997828] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.897 [2024-05-15 11:01:45.998199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.897 [2024-05-15 11:01:45.998227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.897 [2024-05-15 11:01:46.014134] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.897 [2024-05-15 11:01:46.014486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.897 [2024-05-15 11:01:46.014518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.897 [2024-05-15 11:01:46.030459] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.897 [2024-05-15 11:01:46.030799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.897 [2024-05-15 11:01:46.030830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.897 [2024-05-15 11:01:46.046762] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.897 [2024-05-15 11:01:46.047119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.897 [2024-05-15 11:01:46.047146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.897 [2024-05-15 11:01:46.063005] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.897 [2024-05-15 11:01:46.063339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.897 [2024-05-15 11:01:46.063370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.897 [2024-05-15 11:01:46.079506] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.897 [2024-05-15 11:01:46.079845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:6427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.897 [2024-05-15 11:01:46.079876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.897 [2024-05-15 11:01:46.095962] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.897 [2024-05-15 11:01:46.096316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:11804 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.897 [2024-05-15 11:01:46.096342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.897 [2024-05-15 11:01:46.112564] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.897 [2024-05-15 11:01:46.112903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:8909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.898 [2024-05-15 11:01:46.112942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:29.898 [2024-05-15 11:01:46.128994] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:29.898 [2024-05-15 11:01:46.129329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:29.898 [2024-05-15 11:01:46.129367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.156 [2024-05-15 11:01:46.145451] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.156 [2024-05-15 11:01:46.145788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:13285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.156 [2024-05-15 11:01:46.145821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.156 [2024-05-15 11:01:46.161877] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.156 [2024-05-15 11:01:46.162235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.156 [2024-05-15 11:01:46.162277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.156 [2024-05-15 11:01:46.178225] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.156 [2024-05-15 11:01:46.178559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.156 [2024-05-15 11:01:46.178590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.156 [2024-05-15 11:01:46.194759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.156 [2024-05-15 11:01:46.195114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.156 [2024-05-15 11:01:46.195145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.156 [2024-05-15 11:01:46.211291] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.156 [2024-05-15 11:01:46.211640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:123 nsid:1 lba:13193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.156 [2024-05-15 11:01:46.211673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.156 [2024-05-15 11:01:46.227798] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.156 [2024-05-15 11:01:46.228163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:16073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.156 [2024-05-15 11:01:46.228190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.156 [2024-05-15 11:01:46.244233] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.156 [2024-05-15 11:01:46.244589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.156 [2024-05-15 11:01:46.244620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.156 [2024-05-15 11:01:46.260669] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.156 [2024-05-15 11:01:46.261021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.156 [2024-05-15 11:01:46.261052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.156 [2024-05-15 11:01:46.277165] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.156 [2024-05-15 11:01:46.277534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.156 [2024-05-15 11:01:46.277565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.156 [2024-05-15 11:01:46.293605] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.156 [2024-05-15 11:01:46.293955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.156 [2024-05-15 11:01:46.294001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.156 [2024-05-15 11:01:46.309995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.156 [2024-05-15 11:01:46.310360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.156 [2024-05-15 11:01:46.310391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.156 [2024-05-15 11:01:46.326485] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.156 [2024-05-15 11:01:46.326826] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.156 [2024-05-15 11:01:46.326858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.156 [2024-05-15 11:01:46.342767] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.156 [2024-05-15 11:01:46.343131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:19041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.156 [2024-05-15 11:01:46.343159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.156 [2024-05-15 11:01:46.359218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.156 [2024-05-15 11:01:46.359568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:9796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.156 [2024-05-15 11:01:46.359599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.156 [2024-05-15 11:01:46.375636] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.156 [2024-05-15 11:01:46.375982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:23876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.156 [2024-05-15 11:01:46.376010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.414 [2024-05-15 11:01:46.392195] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.414 [2024-05-15 11:01:46.392554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.414 [2024-05-15 11:01:46.392588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.414 [2024-05-15 11:01:46.408532] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.414 [2024-05-15 11:01:46.408871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.414 [2024-05-15 11:01:46.408903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.414 [2024-05-15 11:01:46.425135] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.414 [2024-05-15 11:01:46.425472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.414 [2024-05-15 11:01:46.425503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.414 [2024-05-15 11:01:46.441586] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.414 [2024-05-15 
11:01:46.441924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:25523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.414 [2024-05-15 11:01:46.441977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.414 [2024-05-15 11:01:46.458207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.414 [2024-05-15 11:01:46.458559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.414 [2024-05-15 11:01:46.458590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.414 [2024-05-15 11:01:46.474750] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.414 [2024-05-15 11:01:46.475100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.414 [2024-05-15 11:01:46.475128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.414 [2024-05-15 11:01:46.491264] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.414 [2024-05-15 11:01:46.491613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.414 [2024-05-15 11:01:46.491644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.414 [2024-05-15 11:01:46.507773] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.414 [2024-05-15 11:01:46.508124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.414 [2024-05-15 11:01:46.508152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.414 [2024-05-15 11:01:46.524087] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.414 [2024-05-15 11:01:46.524419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.414 [2024-05-15 11:01:46.524450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.414 [2024-05-15 11:01:46.540560] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.414 [2024-05-15 11:01:46.540894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.414 [2024-05-15 11:01:46.540925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.415 [2024-05-15 11:01:46.556900] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 
00:21:30.415 [2024-05-15 11:01:46.557261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.415 [2024-05-15 11:01:46.557292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.415 [2024-05-15 11:01:46.573447] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.415 [2024-05-15 11:01:46.573781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.415 [2024-05-15 11:01:46.573813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.415 [2024-05-15 11:01:46.589964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.415 [2024-05-15 11:01:46.590307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.415 [2024-05-15 11:01:46.590339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.415 [2024-05-15 11:01:46.606623] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.415 [2024-05-15 11:01:46.606982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.415 [2024-05-15 11:01:46.607009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.415 [2024-05-15 11:01:46.623178] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.415 [2024-05-15 11:01:46.623536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:18317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.415 [2024-05-15 11:01:46.623572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.415 [2024-05-15 11:01:46.639813] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.415 [2024-05-15 11:01:46.640195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.415 [2024-05-15 11:01:46.640238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.673 [2024-05-15 11:01:46.656172] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1190) with pdu=0x2000190fd640 00:21:30.673 [2024-05-15 11:01:46.656527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:30.673 [2024-05-15 11:01:46.656560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:30.673 [2024-05-15 11:01:46.672686] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
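Each completion above carries the NVMe status pair (00/22): status code type 0x0 (generic command status) with status code 0x22, Command Transient Transport Error, and dnr:0 (do not retry clear), which is why the bdev layer keeps retrying instead of failing the job. A small sketch of decoding that pair (the helper name is hypothetical, not part of the test scripts):

  decode_status() {
      # split the "SCT/SC" pair printed by spdk_nvme_print_completion, e.g. 00/22
      local sct=$((16#${1%/*})) sc=$((16#${1#*/}))
      if ((sct == 0 && sc == 0x22)); then
          echo "generic status: COMMAND TRANSIENT TRANSPORT ERROR"
      else
          echo "sct=0x$(printf %02x "$sct") sc=0x$(printf %02x "$sc")"
      fi
  }
  decode_status 00/22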
00:21:30.674
00:21:30.674 Latency(us)
00:21:30.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:30.674 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:21:30.674 nvme0n1 : 2.01 15450.49 60.35 0.00 0.00 8262.82 3640.89 16990.81
00:21:30.674 ===================================================================================================================
00:21:30.674 Total : 15450.49 60.35 0.00 0.00 8262.82 3640.89 16990.81
00:21:30.674 0
00:21:30.674 11:01:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
11:01:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
11:01:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
11:01:46 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:30.674 | .driver_specific
00:21:30.674 | .nvme_error
00:21:30.674 | .status_code
00:21:30.674 | .command_transient_transport_error'
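The jq pipeline above is how host/digest.sh reduces the bdev iostat JSON to a single error count. Pulled out of the trace into one self-contained sketch (socket path, bdev name and jq path exactly as used here; the counter only exists because bdevperf was started with bdev_nvme_set_options --nvme-error-stat):

  get_transient_errcount() {
      # bdev_get_iostat exposes per-NVMe-status-code counters under
      # .driver_specific.nvme_error when --nvme-error-stat is enabled
      scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" |
          jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
  }
  errcount=$(get_transient_errcount nvme0n1)  # 121 in this run
  ((errcount > 0))  # the test requires at least one injected error, as checked below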
.nvme_error 00:21:30.674 | .status_code 00:21:30.674 | .command_transient_transport_error' 00:21:30.932 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 121 > 0 )) 00:21:30.932 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2883774 00:21:30.932 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 2883774 ']' 00:21:30.932 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 2883774 00:21:30.932 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname 00:21:30.932 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:21:30.932 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2883774 00:21:30.932 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:21:30.932 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:21:30.932 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2883774' 00:21:30.932 killing process with pid 2883774 00:21:30.932 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 2883774 00:21:30.932 Received shutdown signal, test time was about 2.000000 seconds 00:21:30.932 00:21:30.932 Latency(us) 00:21:30.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.932 =================================================================================================================== 00:21:30.932 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:30.932 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 2883774 00:21:31.191 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:21:31.191 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:31.191 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:21:31.191 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:31.191 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:31.191 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2884234 00:21:31.191 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:21:31.191 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2884234 /var/tmp/bperf.sock 00:21:31.191 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@827 -- # '[' -z 2884234 ']' 00:21:31.191 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:31.191 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:31.191 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
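bdevperf is launched with -z, so it comes up idle and waits for an RPC before running the randwrite, 131072-byte, queue-depth-16 job; waitforlisten then polls the UNIX-domain RPC socket until the new process answers. A simplified sketch of that startup handshake (the real waitforlisten lives in autotest_common.sh; probing the socket with rpc_get_methods is one reasonable approach, not necessarily the helper's exact mechanism):

  bdevperf=./build/examples/bdevperf  # path shortened from the trace
  "$bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  for ((i = 0; i < 100; i++)); do  # max_retries=100, as in the trace
      # succeeds once the app has created and is serving the RPC socket
      scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods &> /dev/null && break
      sleep 0.5
  done
  ((i < 100)) || exit 1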
11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # xtrace_disable
11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:31.191 [2024-05-15 11:01:47.332536] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization...
00:21:31.191 [2024-05-15 11:01:47.332633] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2884234 ]
00:21:31.191 I/O size of 131072 is greater than zero copy threshold (65536).
00:21:31.191 Zero copy mechanism will not be used.
00:21:31.191 EAL: No free 2048 kB hugepages reported on node 1
00:21:31.191 [2024-05-15 11:01:47.409185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:31.191 [2024-05-15 11:01:47.527569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:21:31.449 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:21:31.449 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # return 0
00:21:31.449 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:31.449 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:21:31.707 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:21:31.707 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:31.707 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:31.707 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:31.707 11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
11:01:47 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:21:32.273 nvme0n1
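The RPCs above do the per-test setup: --nvme-error-stat turns on the per-status-code counters read back later, --bdev-retry-count -1 makes the bdev layer retry failed I/O indefinitely (so injected digest errors are counted rather than failing the job), any leftover crc32c error injection is cleared, and the controller is attached with --ddgst so every NVMe/TCP data PDU carries a CRC32C data digest. Collected into one sketch, with all arguments verbatim from the trace and only the paths shortened:

  rpc="scripts/rpc.py -s /var/tmp/bperf.sock"
  # keep NVMe error statistics and retry transient failures forever
  $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # make sure no crc32c corruption is still armed from an earlier test
  $rpc accel_error_inject_error -o crc32c -t disable
  # connect to the target with the TCP data digest (DDGST) enabled;
  # the resulting bdev is reported back as nvme0n1
  $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0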
than zero copy threshold (65536). 00:21:32.273 Zero copy mechanism will not be used. 00:21:32.273 Running I/O for 2 seconds... 00:21:32.532 [2024-05-15 11:01:48.531500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:32.532 [2024-05-15 11:01:48.532016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.532 [2024-05-15 11:01:48.532071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.532 [2024-05-15 11:01:48.559362] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:32.532 [2024-05-15 11:01:48.560066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.532 [2024-05-15 11:01:48.560099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.532 [2024-05-15 11:01:48.585088] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:32.532 [2024-05-15 11:01:48.585802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.532 [2024-05-15 11:01:48.585836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:32.532 [2024-05-15 11:01:48.611366] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:32.532 [2024-05-15 11:01:48.612055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.532 [2024-05-15 11:01:48.612085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:32.532 [2024-05-15 11:01:48.640070] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:32.532 [2024-05-15 11:01:48.640655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.532 [2024-05-15 11:01:48.640700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:32.532 [2024-05-15 11:01:48.668268] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:32.532 [2024-05-15 11:01:48.668952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.532 [2024-05-15 11:01:48.668998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:32.532 [2024-05-15 11:01:48.693457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:32.532 [2024-05-15 11:01:48.694326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:32.532 
00:21:32.532 [2024-05-15 11:01:48.531500] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90
00:21:32.532 [2024-05-15 11:01:48.532016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:32.532 [2024-05-15 11:01:48.532071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same digest-error/WRITE/TRANSIENT TRANSPORT ERROR pattern repeats for the 131072-byte (len:32) writes, all on qid:1 cid:15, at roughly 25 ms intervals from 11:01:48.559 through 11:01:49.362 ...]
00:21:33.309 [2024-05-15 11:01:49.388598] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90
[2024-05-15 11:01:49.389171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.309 [2024-05-15 11:01:49.389204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.309 [2024-05-15 11:01:49.415894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:33.309 [2024-05-15 11:01:49.416595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.309 [2024-05-15 11:01:49.416628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.309 [2024-05-15 11:01:49.444255] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:33.309 [2024-05-15 11:01:49.444698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.309 [2024-05-15 11:01:49.444739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.309 [2024-05-15 11:01:49.471823] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:33.309 [2024-05-15 11:01:49.472665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.309 [2024-05-15 11:01:49.472693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.310 [2024-05-15 11:01:49.499650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:33.310 [2024-05-15 11:01:49.500253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.310 [2024-05-15 11:01:49.500301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.310 [2024-05-15 11:01:49.527835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:33.310 [2024-05-15 11:01:49.528359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.310 [2024-05-15 11:01:49.528394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.568 [2024-05-15 11:01:49.556217] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:33.568 [2024-05-15 11:01:49.556724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.568 [2024-05-15 11:01:49.556759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.568 [2024-05-15 11:01:49.581991] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:33.568 [2024-05-15 11:01:49.582637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.568 [2024-05-15 11:01:49.582669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.568 [2024-05-15 11:01:49.608438] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:33.568 [2024-05-15 11:01:49.609395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.568 [2024-05-15 11:01:49.609428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.568 [2024-05-15 11:01:49.637812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:33.568 [2024-05-15 11:01:49.638443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.568 [2024-05-15 11:01:49.638477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.568 [2024-05-15 11:01:49.666637] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:33.568 [2024-05-15 11:01:49.667183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.568 [2024-05-15 11:01:49.667211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.568 [2024-05-15 11:01:49.695445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:33.568 [2024-05-15 11:01:49.696076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.568 [2024-05-15 11:01:49.696120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.568 [2024-05-15 11:01:49.721370] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:33.568 [2024-05-15 11:01:49.722183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.568 [2024-05-15 11:01:49.722228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.568 [2024-05-15 11:01:49.748068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:33.568 [2024-05-15 11:01:49.748868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.568 [2024-05-15 11:01:49.748900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.568 [2024-05-15 11:01:49.777614] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:33.568 [2024-05-15 11:01:49.778275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.568 [2024-05-15 11:01:49.778309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.827 [2024-05-15 11:01:49.804624] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:33.827 [2024-05-15 11:01:49.805354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.827 [2024-05-15 11:01:49.805390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.827 [2024-05-15 11:01:49.832001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:33.827 [2024-05-15 11:01:49.832723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.827 [2024-05-15 11:01:49.832757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:33.827 [2024-05-15 11:01:49.860558] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:33.827 [2024-05-15 11:01:49.861406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.827 [2024-05-15 11:01:49.861440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.827 [2024-05-15 11:01:49.888177] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:33.827 [2024-05-15 11:01:49.888810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.827 [2024-05-15 11:01:49.888843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.827 [2024-05-15 11:01:49.917099] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:33.827 [2024-05-15 11:01:49.918020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.827 [2024-05-15 11:01:49.918050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.827 [2024-05-15 11:01:49.945438] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:33.827 [2024-05-15 11:01:49.946141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.827 [2024-05-15 11:01:49.946184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:21:33.827 [2024-05-15 11:01:49.974599] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:33.827 [2024-05-15 11:01:49.975504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.827 [2024-05-15 11:01:49.975537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:33.827 [2024-05-15 11:01:50.003199] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:33.827 [2024-05-15 11:01:50.003662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.827 [2024-05-15 11:01:50.003696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:33.827 [2024-05-15 11:01:50.030812] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:33.827 [2024-05-15 11:01:50.031441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.827 [2024-05-15 11:01:50.031495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:33.827 [2024-05-15 11:01:50.054681] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:33.827 [2024-05-15 11:01:50.055237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:33.827 [2024-05-15 11:01:50.055291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.085 [2024-05-15 11:01:50.078870] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:34.085 [2024-05-15 11:01:50.079602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.085 [2024-05-15 11:01:50.079641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.085 [2024-05-15 11:01:50.105857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:34.085 [2024-05-15 11:01:50.106801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.085 [2024-05-15 11:01:50.106836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.085 [2024-05-15 11:01:50.132894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:34.085 [2024-05-15 11:01:50.133363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.085 [2024-05-15 11:01:50.133398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.085 [2024-05-15 11:01:50.162066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:34.085 [2024-05-15 11:01:50.162659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.085 [2024-05-15 11:01:50.162702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.085 [2024-05-15 11:01:50.189558] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:34.085 [2024-05-15 11:01:50.190266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.086 [2024-05-15 11:01:50.190296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.086 [2024-05-15 11:01:50.218801] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:34.086 [2024-05-15 11:01:50.219412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.086 [2024-05-15 11:01:50.219447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.086 [2024-05-15 11:01:50.246804] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:34.086 [2024-05-15 11:01:50.247547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.086 [2024-05-15 11:01:50.247582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.086 [2024-05-15 11:01:50.275693] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:34.086 [2024-05-15 11:01:50.276136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.086 [2024-05-15 11:01:50.276181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.086 [2024-05-15 11:01:50.305963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:34.086 [2024-05-15 11:01:50.306501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.086 [2024-05-15 11:01:50.306535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.344 [2024-05-15 11:01:50.332633] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:34.344 [2024-05-15 11:01:50.333486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.344 [2024-05-15 11:01:50.333522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.344 [2024-05-15 11:01:50.361141] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:34.344 [2024-05-15 11:01:50.362130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.344 [2024-05-15 11:01:50.362175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.344 [2024-05-15 11:01:50.390162] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:34.344 [2024-05-15 11:01:50.390868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.344 [2024-05-15 11:01:50.390902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.344 [2024-05-15 11:01:50.418495] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:34.344 [2024-05-15 11:01:50.419327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.344 [2024-05-15 11:01:50.419362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:34.344 [2024-05-15 11:01:50.446631] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:34.344 [2024-05-15 11:01:50.447329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.344 [2024-05-15 11:01:50.447359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:34.344 [2024-05-15 11:01:50.474283] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:34.344 [2024-05-15 11:01:50.475349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.344 [2024-05-15 11:01:50.475395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:34.344 [2024-05-15 11:01:50.501803] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20f1670) with pdu=0x2000190fef90 00:21:34.344 [2024-05-15 11:01:50.502490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.344 [2024-05-15 11:01:50.502520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:34.344 00:21:34.344 Latency(us) 00:21:34.344 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.344 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:34.344 nvme0n1 : 2.01 1166.13 145.77 0.00 0.00 13671.62 4733.16 29709.65 00:21:34.344 
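The run summarized in the table is the error-injection leg of the digest test: every WRITE above fails its data-digest (CRC32C) check on completion, so the initiator reports COMMAND TRANSIENT TRANSPORT ERROR rather than success. As a standalone sketch (not part of the harness), a run of the same shape could be driven by hand. The config-file path and the outer "subsystems" wrapper are assumptions based on SPDK's usual JSON config layout; the attach-controller params mirror the ones traced later in this log, and the job flags are read off the "randwrite, depth: 16, IO size: 131072", roughly 2 s job line above:

cat > /tmp/bperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": true
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
# ddgst=true enables the data-digest path whose mismatches surface above;
# job shape matches the traced run: random writes, depth 16, 128 KiB I/O, ~2 s.
./build/examples/bdevperf --json /tmp/bperf.json -q 16 -o 131072 -w randwrite -t 2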
00:21:34.344 11:01:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
11:01:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
11:01:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:34.344 | .driver_specific
00:21:34.344 | .nvme_error
00:21:34.344 | .status_code
00:21:34.344 | .command_transient_transport_error'
00:21:34.344 11:01:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:21:34.602 11:01:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 75 > 0 ))
11:01:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2884234
11:01:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 2884234 ']'
11:01:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 2884234
11:01:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
11:01:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
11:01:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2884234
11:01:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_1
11:01:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
11:01:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2884234'
killing process with pid 2884234
11:01:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 2884234
Received shutdown signal, test time was about 2.000000 seconds
00:21:34.602
00:21:34.602 Latency(us)
00:21:34.602 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:34.602 ===================================================================================================================
00:21:34.602 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:34.602 11:01:50 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 2884234
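The transient-error check just traced reduces to one RPC-plus-jq pipeline. Restated as a standalone sketch, with the socket path, bdev name, and jq field path taken verbatim from the trace (the dotted jq form is equivalent to the piped filter above):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
# The harness asserts the printed count is positive, hence the traced (( 75 > 0 )).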
00:21:35.168 11:01:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2882801
11:01:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@946 -- # '[' -z 2882801 ']'
11:01:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # kill -0 2882801
11:01:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # uname
11:01:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
11:01:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2882801
11:01:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # process_name=reactor_0
11:01:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']'
11:01:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2882801'
killing process with pid 2882801
11:01:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # kill 2882801
[2024-05-15 11:01:51.134323] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
11:01:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # wait 2882801
00:21:35.426
00:21:35.426 real 0m16.420s
00:21:35.426 user 0m33.294s
00:21:35.426 sys 0m3.970s
11:01:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # xtrace_disable
11:01:51 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:21:35.426 ************************************
00:21:35.426 END TEST nvmf_digest_error
00:21:35.426 ************************************
11:01:51 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
11:01:51 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
11:01:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
11:01:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
11:01:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
11:01:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
11:01:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
11:01:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
11:01:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
11:01:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
11:01:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
11:01:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2882801 ']'
11:01:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2882801
11:01:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@946 -- # '[' -z 2882801 ']'
11:01:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@950 -- # kill -0 2882801
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2882801) - No such process
11:01:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@973 -- # echo 'Process with pid 2882801 is not found'
Process with pid 2882801 is not found
11:01:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
11:01:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
11:01:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
11:01:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
11:01:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
11:01:51 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
11:01:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
11:01:51 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:37.326 11:01:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:37.326
00:21:37.326 real 0m39.705s
00:21:37.326 user 1m11.787s
00:21:37.326 sys 0m9.563s
11:01:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # xtrace_disable
11:01:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:21:37.326 ************************************
00:21:37.326 END TEST nvmf_digest
00:21:37.326 ************************************
11:01:53 nvmf_tcp -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]]
11:01:53 nvmf_tcp -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]]
11:01:53 nvmf_tcp -- nvmf/nvmf.sh@119 -- # [[ phy == phy ]]
11:01:53 nvmf_tcp -- nvmf/nvmf.sh@120 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
11:01:53 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
11:01:53 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
11:01:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:21:37.584 ************************************
00:21:37.584 START TEST nvmf_bdevperf
00:21:37.584 ************************************
11:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:21:37.584 * Looking for test storage...
00:21:37.584 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
11:01:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:21:37.584 11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
11:01:53 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
11:01:53 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
11:01:53 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
11:01:53 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go toolchain triplet repeated several times, elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
11:01:53 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... repeated toolchain segments elided ...]:/var/lib/snapd/snap/bin
11:01:53 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... repeated toolchain segments elided ...]:/var/lib/snapd/snap/bin
11:01:53 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH
11:01:53 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... repeated toolchain segments elided ...]:/var/lib/snapd/snap/bin
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']'
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0
11:01:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
11:01:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
11:01:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']'
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
11:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
11:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]]
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
11:01:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable
11:01:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:21:40.111 11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
[... nvmf/common.sh@291-@318: mechanical initialization of the pci_devs, pci_net_devs, pci_drivers, net_devs, e810, x722, and mlx device-ID arrays (Intel 0x1592/0x159b/0x37d2; Mellanox 0xa2dc/0x1021/0xa2d6/0x101d/0x1017/0x1019/0x1015/0x1013) elided ...]
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 ))
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
Found 0000:0a:00.0 (0x8086 - 0x159b)
[... driver checks (ice, 0x1017/0x1019, rdma) for 0000:0a:00.0 elided ...]
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
Found 0000:0a:00.1 (0x8086 - 0x159b)
[... the same checks for 0000:0a:00.1, plus the per-port net-device walk (nvmf/common.sh@382-@399), elided ...]
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
Found net devices under 0000:0a:00.0: cvl_0_0
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
Found net devices under 0000:0a:00.1: cvl_0_1
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 ))
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]]
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 ))
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms

--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms

--- 10.0.0.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
11:01:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
11:01:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
11:01:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable
11:01:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2886948
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
11:01:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2886948
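Condensed from the nvmf_tcp_init trace above, the physical-NIC topology for this job comes down to a handful of ip/iptables commands: one E810 port (cvl_0_0) is moved into a private network namespace and carries the target address, its sibling port (cvl_0_1) stays in the root namespace as the initiator, and a ping in each direction proves the path before nvmf_tgt is started inside the namespace. A standalone sketch, with interface names as discovered on this machine (they would differ elsewhere):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                    # initiator to target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target to initiator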
11:01:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 2886948 ']'
11:01:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock
11:01:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100
11:01:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
11:01:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable
11:01:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-05-15 11:01:56.355118] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization...
[2024-05-15 11:01:56.355200] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-05-15 11:01:56.434758] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
[2024-05-15 11:01:56.542141] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-05-15 11:01:56.542193] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-05-15 11:01:56.542230] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-05-15 11:01:56.542242] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-05-15 11:01:56.542254] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-05-15 11:01:56.542348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
[2024-05-15 11:01:56.542416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
[2024-05-15 11:01:56.542413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:21:41.303 11:01:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 ))
11:01:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0
11:01:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
11:01:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
11:01:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
11:01:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
11:01:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
11:01:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
11:01:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-05-15 11:01:57.329853] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
11:01:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
11:01:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
11:01:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
11:01:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
Malloc0
11:01:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
11:01:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:41.303 11:01:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.303 11:01:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:41.303 11:01:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.303 11:01:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:41.303 11:01:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.303 11:01:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:41.303 11:01:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.303 11:01:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:41.303 11:01:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.303 11:01:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:41.303 [2024-05-15 11:01:57.398047] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:41.303 [2024-05-15 11:01:57.398366] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.303 11:01:57 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.303 11:01:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:21:41.303 11:01:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:21:41.303 11:01:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:21:41.303 11:01:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:21:41.303 11:01:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:41.303 11:01:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:41.303 { 00:21:41.303 "params": { 00:21:41.303 "name": "Nvme$subsystem", 00:21:41.303 "trtype": "$TEST_TRANSPORT", 00:21:41.303 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:41.303 "adrfam": "ipv4", 00:21:41.303 "trsvcid": "$NVMF_PORT", 00:21:41.303 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:41.303 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:41.303 "hdgst": ${hdgst:-false}, 00:21:41.303 "ddgst": ${ddgst:-false} 00:21:41.303 }, 00:21:41.303 "method": "bdev_nvme_attach_controller" 00:21:41.303 } 00:21:41.303 EOF 00:21:41.303 )") 00:21:41.303 11:01:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:21:41.303 11:01:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 
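[Annotation] The trace above is the whole target-side provisioning for this test: rpc_cmd (a thin wrapper around SPDK's scripts/rpc.py, talking to /var/tmp/spdk.sock) creates the TCP transport, a malloc bdev, subsystem cnode1 with namespace Malloc0, and a listener on 10.0.0.2:4420, and then the first bdevperf run is launched against it. A minimal standalone sketch of the same sequence, assuming an SPDK checkout as the working directory (the flags are copied from the trace; only the rpc.py invocation itself is inferred):

    # Provision the NVMe-oF/TCP target the way the trace above does, one RPC at a time
    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -o -u 8192                                     # TCP transport, 8 KiB IO unit size
    $RPC bdev_malloc_create 64 512 -b Malloc0                                        # 64 MiB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host NQN
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose Malloc0 as namespace 1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420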
00:21:41.303 11:01:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:21:41.303 11:01:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:21:41.303 "params": {
00:21:41.303 "name": "Nvme1",
00:21:41.303 "trtype": "tcp",
00:21:41.303 "traddr": "10.0.0.2",
00:21:41.303 "adrfam": "ipv4",
00:21:41.303 "trsvcid": "4420",
00:21:41.303 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:21:41.303 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:21:41.303 "hdgst": false,
00:21:41.303 "ddgst": false
00:21:41.303 },
00:21:41.303 "method": "bdev_nvme_attach_controller"
00:21:41.303 }'
[2024-05-15 11:01:57.447057] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization...
[2024-05-15 11:01:57.447129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887101 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-05-15 11:01:57.520971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-05-15 11:01:57.634633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 1 seconds...
00:21:42.786
00:21:42.786                                                       Latency(us)
00:21:42.786 Device Information          : runtime(s)     IOPS    MiB/s   Fail/s     TO/s    Average      min      max
00:21:42.786 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:42.786    Verification LBA range: start 0x0 length 0x4000
00:21:42.786    Nvme1n1                  :       1.01  8579.06    33.51     0.00     0.00   14827.91  2682.12 18544.26
00:21:42.786 ===================================================================================================================
00:21:42.786 Total                       :              8579.06    33.51     0.00     0.00   14827.91  2682.12 18544.26
00:21:43.044 11:01:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2887361
00:21:43.044 11:01:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:21:43.044 11:01:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:21:43.044 11:01:59 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:21:43.044 11:01:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:21:43.044 11:01:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:21:43.044 11:01:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:21:43.044 11:01:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:21:43.044 {
00:21:43.044 "params": {
00:21:43.044 "name": "Nvme$subsystem",
00:21:43.044 "trtype": "$TEST_TRANSPORT",
00:21:43.044 "traddr": "$NVMF_FIRST_TARGET_IP",
00:21:43.044 "adrfam": "ipv4",
00:21:43.044 "trsvcid": "$NVMF_PORT",
00:21:43.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:21:43.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:21:43.044 "hdgst": ${hdgst:-false},
00:21:43.044 "ddgst": ${ddgst:-false}
00:21:43.044 },
00:21:43.044 "method": "bdev_nvme_attach_controller"
00:21:43.044 }
00:21:43.044 EOF
00:21:43.044 )")
00:21:43.044 11:01:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:21:43.044 11:01:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
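[Annotation] The one-second verify pass completes cleanly, and its result table is internally consistent: at a 4096-byte IO size the reported IOPS and MiB/s agree, and with queue depth 128 the average latency implies roughly the measured IOPS (Little's law). A quick check, using only numbers from the run above:

    echo '8579.06 * 4096 / (1024 * 1024)' | bc -l    # = 33.51 -> matches the MiB/s column
    echo '128 / (14827.91 / 1000000)'     | bc -l    # qd / avg latency(s) = ~8632 IOPS, close to the measured 8579.06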
00:21:43.044 11:01:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:21:43.044 11:01:59 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:43.044 "params": { 00:21:43.044 "name": "Nvme1", 00:21:43.044 "trtype": "tcp", 00:21:43.044 "traddr": "10.0.0.2", 00:21:43.044 "adrfam": "ipv4", 00:21:43.044 "trsvcid": "4420", 00:21:43.044 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.044 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:43.044 "hdgst": false, 00:21:43.044 "ddgst": false 00:21:43.044 }, 00:21:43.044 "method": "bdev_nvme_attach_controller" 00:21:43.044 }' 00:21:43.044 [2024-05-15 11:01:59.263256] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:21:43.044 [2024-05-15 11:01:59.263360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2887361 ] 00:21:43.302 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.302 [2024-05-15 11:01:59.334267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.302 [2024-05-15 11:01:59.443706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:43.559 Running I/O for 15 seconds... 00:21:46.088 11:02:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2886948 00:21:46.088 11:02:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:21:46.088 [2024-05-15 11:02:02.235374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.089 [2024-05-15 11:02:02.235429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.089 [2024-05-15 11:02:02.235463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.089 [2024-05-15 11:02:02.235486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.089 [2024-05-15 11:02:02.235508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.089 [2024-05-15 11:02:02.235526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.089 [2024-05-15 11:02:02.235545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.089 [2024-05-15 11:02:02.235563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.089 [2024-05-15 11:02:02.235581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.089 [2024-05-15 11:02:02.235598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.089 [2024-05-15 11:02:02.235620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.089 [2024-05-15 11:02:02.235639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0
[... ~120 near-identical nvme_qpair.c NOTICE pairs elided (11:02:02.235659 through 11:02:02.239827): nvme_io_qpair_print_command printed every remaining in-flight command on qid:1 -- WRITE lba:39392-40176 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ lba:39160-39320 (len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) -- and spdk_nvme_print_completion reported each one as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
[2024-05-15 11:02:02.239845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-05-15 11:02:02.239861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-05-15 11:02:02.239887] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d10770 is same with the state(5) to be set
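[Annotation] Everything in the dump above is the host draining its own queue after the target was killed with -9: each queued command is printed once and then completed with the same synthetic status. The "(00/08)" pair is the SCT/SC fields of the NVMe completion status; decoded per the NVMe base specification (a reference note, not harness output):

    # Tiny lookup for the two fields printed as "(SCT/SC)" above; names are from the NVMe base spec
    declare -A generic_sc=( [0x00]="Successful Completion" [0x07]="Command Abort Requested" [0x08]="Command Aborted due to SQ Deletion" )
    echo "SCT 0x00 (Generic Command Status) / SC 0x08: ${generic_sc[0x08]}"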
00:21:46.092 [2024-05-15 11:02:02.239908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.092 [2024-05-15 11:02:02.239922] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.092 [2024-05-15 11:02:02.240127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39336 len:8 PRP1 0x0 PRP2 0x0 00:21:46.092 [2024-05-15 11:02:02.240146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.092 [2024-05-15 11:02:02.240230] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d10770 was disconnected and freed. reset controller. 00:21:46.092 [2024-05-15 11:02:02.240308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.092 [2024-05-15 11:02:02.240333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.092 [2024-05-15 11:02:02.240350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.092 [2024-05-15 11:02:02.240366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.092 [2024-05-15 11:02:02.240381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.092 [2024-05-15 11:02:02.240397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.092 [2024-05-15 11:02:02.240412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.092 [2024-05-15 11:02:02.240428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.092 [2024-05-15 11:02:02.240443] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:46.092 [2024-05-15 11:02:02.244070] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.092 [2024-05-15 11:02:02.244107] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:46.092 [2024-05-15 11:02:02.245038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.092 [2024-05-15 11:02:02.245068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:46.092 [2024-05-15 11:02:02.245086] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:46.092 [2024-05-15 11:02:02.245338] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:46.092 [2024-05-15 11:02:02.245586] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:46.092 [2024-05-15 11:02:02.245609] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:46.092 
[2024-05-15 11:02:02.245635] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.092 [2024-05-15 11:02:02.249259] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.092 [2024-05-15 11:02:02.258299] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.092 [2024-05-15 11:02:02.258819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.092 [2024-05-15 11:02:02.258851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:46.092 [2024-05-15 11:02:02.258870] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:46.092 [2024-05-15 11:02:02.259124] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:46.092 [2024-05-15 11:02:02.259370] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:46.092 [2024-05-15 11:02:02.259395] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:46.092 [2024-05-15 11:02:02.259411] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.092 [2024-05-15 11:02:02.263040] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.092 [2024-05-15 11:02:02.272243] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.093 [2024-05-15 11:02:02.272724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.093 [2024-05-15 11:02:02.272757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:46.093 [2024-05-15 11:02:02.272777] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:46.093 [2024-05-15 11:02:02.273030] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:46.093 [2024-05-15 11:02:02.273277] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:46.093 [2024-05-15 11:02:02.273303] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:46.093 [2024-05-15 11:02:02.273320] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.093 [2024-05-15 11:02:02.276948] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
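[Annotation] From here the bdev_nvme layer loops: disconnect, attempt a fresh TCP connection, fail, mark the controller failed, retry. The connect() errno it logs, 111, is ECONNREFUSED -- expected, since kill -9 removed the only listener on 10.0.0.2:4420 and nothing has re-created it yet. Verifiable on any Linux host:

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused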
00:21:46.093 [2024-05-15 11:02:02.286142] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.093 [2024-05-15 11:02:02.286724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.093 [2024-05-15 11:02:02.286778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.093 [2024-05-15 11:02:02.286810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.093 [2024-05-15 11:02:02.287078] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.093 [2024-05-15 11:02:02.287327] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.093 [2024-05-15 11:02:02.287353] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.093 [2024-05-15 11:02:02.287370] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.093 [2024-05-15 11:02:02.291003] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.093 [2024-05-15 11:02:02.300200] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.093 [2024-05-15 11:02:02.300729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.093 [2024-05-15 11:02:02.300763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.093 [2024-05-15 11:02:02.300782] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.093 [2024-05-15 11:02:02.301038] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.093 [2024-05-15 11:02:02.301285] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.093 [2024-05-15 11:02:02.301311] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.093 [2024-05-15 11:02:02.301328] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.093 [2024-05-15 11:02:02.304958] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.093 [2024-05-15 11:02:02.314153] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.093 [2024-05-15 11:02:02.314718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.093 [2024-05-15 11:02:02.314752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.093 [2024-05-15 11:02:02.314770] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.093 [2024-05-15 11:02:02.315059] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.093 [2024-05-15 11:02:02.315315] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.093 [2024-05-15 11:02:02.315342] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.093 [2024-05-15 11:02:02.315358] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.352 [2024-05-15 11:02:02.319016] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.352 [2024-05-15 11:02:02.328252] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.352 [2024-05-15 11:02:02.328719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.352 [2024-05-15 11:02:02.328753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.352 [2024-05-15 11:02:02.328771] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.352 [2024-05-15 11:02:02.329024] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.352 [2024-05-15 11:02:02.329272] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.352 [2024-05-15 11:02:02.329298] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.352 [2024-05-15 11:02:02.329315] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.352 [2024-05-15 11:02:02.332945] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.352 [2024-05-15 11:02:02.342140] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.352 [2024-05-15 11:02:02.342627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.352 [2024-05-15 11:02:02.342655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.352 [2024-05-15 11:02:02.342671] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.352 [2024-05-15 11:02:02.342923] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.352 [2024-05-15 11:02:02.343190] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.352 [2024-05-15 11:02:02.343217] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.352 [2024-05-15 11:02:02.343233] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.352 [2024-05-15 11:02:02.346854] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.352 [2024-05-15 11:02:02.356053] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.352 [2024-05-15 11:02:02.356558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.352 [2024-05-15 11:02:02.356586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.352 [2024-05-15 11:02:02.356602] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.352 [2024-05-15 11:02:02.356851] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.352 [2024-05-15 11:02:02.357111] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.352 [2024-05-15 11:02:02.357138] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.352 [2024-05-15 11:02:02.357154] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.352 [2024-05-15 11:02:02.360775] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.352 [2024-05-15 11:02:02.369976] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.352 [2024-05-15 11:02:02.370474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.352 [2024-05-15 11:02:02.370507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.352 [2024-05-15 11:02:02.370525] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.352 [2024-05-15 11:02:02.370767] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.352 [2024-05-15 11:02:02.371034] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.352 [2024-05-15 11:02:02.371061] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.352 [2024-05-15 11:02:02.371078] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.352 [2024-05-15 11:02:02.374697] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.352 [2024-05-15 11:02:02.384035] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.352 [2024-05-15 11:02:02.384538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.352 [2024-05-15 11:02:02.384571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.352 [2024-05-15 11:02:02.384589] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.352 [2024-05-15 11:02:02.384831] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.352 [2024-05-15 11:02:02.385091] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.352 [2024-05-15 11:02:02.385118] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.352 [2024-05-15 11:02:02.385134] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.352 [2024-05-15 11:02:02.388761] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.352 [2024-05-15 11:02:02.397962] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.352 [2024-05-15 11:02:02.398455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.352 [2024-05-15 11:02:02.398487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.352 [2024-05-15 11:02:02.398505] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.352 [2024-05-15 11:02:02.398748] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.352 [2024-05-15 11:02:02.399007] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.352 [2024-05-15 11:02:02.399034] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.352 [2024-05-15 11:02:02.399050] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.352 [2024-05-15 11:02:02.402670] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.352 [2024-05-15 11:02:02.411864] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.352 [2024-05-15 11:02:02.412380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.352 [2024-05-15 11:02:02.412412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.352 [2024-05-15 11:02:02.412430] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.352 [2024-05-15 11:02:02.412673] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.352 [2024-05-15 11:02:02.412918] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.352 [2024-05-15 11:02:02.412956] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.352 [2024-05-15 11:02:02.412975] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.352 [2024-05-15 11:02:02.416597] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.352 [2024-05-15 11:02:02.425784] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.352 [2024-05-15 11:02:02.426296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.353 [2024-05-15 11:02:02.426329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.353 [2024-05-15 11:02:02.426347] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.353 [2024-05-15 11:02:02.426590] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.353 [2024-05-15 11:02:02.426835] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.353 [2024-05-15 11:02:02.426861] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.353 [2024-05-15 11:02:02.426877] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.353 [2024-05-15 11:02:02.430506] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.353 [2024-05-15 11:02:02.439699] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.353 [2024-05-15 11:02:02.440230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.353 [2024-05-15 11:02:02.440263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.353 [2024-05-15 11:02:02.440279] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.353 [2024-05-15 11:02:02.440540] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.353 [2024-05-15 11:02:02.440787] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.353 [2024-05-15 11:02:02.440812] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.353 [2024-05-15 11:02:02.440829] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.353 [2024-05-15 11:02:02.444462] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.353 [2024-05-15 11:02:02.453655] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.353 [2024-05-15 11:02:02.454245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.353 [2024-05-15 11:02:02.454278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.353 [2024-05-15 11:02:02.454296] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.353 [2024-05-15 11:02:02.454538] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.353 [2024-05-15 11:02:02.454784] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.353 [2024-05-15 11:02:02.454810] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.353 [2024-05-15 11:02:02.454826] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.353 [2024-05-15 11:02:02.458460] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.353 [2024-05-15 11:02:02.467653] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.353 [2024-05-15 11:02:02.468136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.353 [2024-05-15 11:02:02.468168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.353 [2024-05-15 11:02:02.468186] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.353 [2024-05-15 11:02:02.468428] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.353 [2024-05-15 11:02:02.468673] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.353 [2024-05-15 11:02:02.468698] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.353 [2024-05-15 11:02:02.468715] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.353 [2024-05-15 11:02:02.472354] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.353 [2024-05-15 11:02:02.481550] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.353 [2024-05-15 11:02:02.482033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.353 [2024-05-15 11:02:02.482067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.353 [2024-05-15 11:02:02.482087] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.353 [2024-05-15 11:02:02.482331] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.353 [2024-05-15 11:02:02.482584] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.353 [2024-05-15 11:02:02.482610] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.353 [2024-05-15 11:02:02.482626] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.353 [2024-05-15 11:02:02.486259] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.353 [2024-05-15 11:02:02.495597] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.353 [2024-05-15 11:02:02.496076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.353 [2024-05-15 11:02:02.496111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.353 [2024-05-15 11:02:02.496132] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.353 [2024-05-15 11:02:02.496393] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.353 [2024-05-15 11:02:02.496645] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.353 [2024-05-15 11:02:02.496671] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.353 [2024-05-15 11:02:02.496688] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.353 [2024-05-15 11:02:02.500367] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.353 [2024-05-15 11:02:02.509716] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.353 [2024-05-15 11:02:02.510190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.353 [2024-05-15 11:02:02.510224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.353 [2024-05-15 11:02:02.510243] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.353 [2024-05-15 11:02:02.510486] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.353 [2024-05-15 11:02:02.510733] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.353 [2024-05-15 11:02:02.510760] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.353 [2024-05-15 11:02:02.510776] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.353 [2024-05-15 11:02:02.514444] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.353 [2024-05-15 11:02:02.523757] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.353 [2024-05-15 11:02:02.524240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.353 [2024-05-15 11:02:02.524268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.353 [2024-05-15 11:02:02.524285] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.353 [2024-05-15 11:02:02.524536] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.353 [2024-05-15 11:02:02.524794] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.353 [2024-05-15 11:02:02.524821] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.353 [2024-05-15 11:02:02.524837] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.353 [2024-05-15 11:02:02.528496] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.353 [2024-05-15 11:02:02.537694] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.353 [2024-05-15 11:02:02.538242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.353 [2024-05-15 11:02:02.538272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.353 [2024-05-15 11:02:02.538288] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.353 [2024-05-15 11:02:02.538552] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.353 [2024-05-15 11:02:02.538799] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.353 [2024-05-15 11:02:02.538825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.353 [2024-05-15 11:02:02.538842] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.353 [2024-05-15 11:02:02.542476] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.353 [2024-05-15 11:02:02.551672] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.353 [2024-05-15 11:02:02.552154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.353 [2024-05-15 11:02:02.552186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.353 [2024-05-15 11:02:02.552205] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.353 [2024-05-15 11:02:02.552448] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.353 [2024-05-15 11:02:02.552694] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.353 [2024-05-15 11:02:02.552719] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.353 [2024-05-15 11:02:02.552735] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.354 [2024-05-15 11:02:02.556366] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.354 [2024-05-15 11:02:02.565768] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.354 [2024-05-15 11:02:02.566273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.354 [2024-05-15 11:02:02.566306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.354 [2024-05-15 11:02:02.566324] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.354 [2024-05-15 11:02:02.566565] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.354 [2024-05-15 11:02:02.566811] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.354 [2024-05-15 11:02:02.566837] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.354 [2024-05-15 11:02:02.566854] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.354 [2024-05-15 11:02:02.570484] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.354 [2024-05-15 11:02:02.579680] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.354 [2024-05-15 11:02:02.580145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.354 [2024-05-15 11:02:02.580173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.354 [2024-05-15 11:02:02.580193] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.354 [2024-05-15 11:02:02.580430] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.354 [2024-05-15 11:02:02.580682] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.354 [2024-05-15 11:02:02.580708] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.354 [2024-05-15 11:02:02.580723] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.612 [2024-05-15 11:02:02.584396] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.612 [2024-05-15 11:02:02.593615] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.612 [2024-05-15 11:02:02.594135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.612 [2024-05-15 11:02:02.594167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.612 [2024-05-15 11:02:02.594186] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.612 [2024-05-15 11:02:02.594428] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.612 [2024-05-15 11:02:02.594673] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.612 [2024-05-15 11:02:02.594698] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.612 [2024-05-15 11:02:02.594714] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.612 [2024-05-15 11:02:02.598345] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.612 [2024-05-15 11:02:02.607534] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.612 [2024-05-15 11:02:02.608043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.612 [2024-05-15 11:02:02.608075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.612 [2024-05-15 11:02:02.608094] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.612 [2024-05-15 11:02:02.608337] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.612 [2024-05-15 11:02:02.608582] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.612 [2024-05-15 11:02:02.608608] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.612 [2024-05-15 11:02:02.608624] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.612 [2024-05-15 11:02:02.612256] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.612 [2024-05-15 11:02:02.621446] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.612 [2024-05-15 11:02:02.621918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.612 [2024-05-15 11:02:02.621958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.612 [2024-05-15 11:02:02.621977] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.612 [2024-05-15 11:02:02.622220] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.612 [2024-05-15 11:02:02.622466] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.612 [2024-05-15 11:02:02.622497] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.612 [2024-05-15 11:02:02.622514] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.612 [2024-05-15 11:02:02.626144] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.612 [2024-05-15 11:02:02.635360] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.613 [2024-05-15 11:02:02.635886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.613 [2024-05-15 11:02:02.635919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.613 [2024-05-15 11:02:02.635948] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.613 [2024-05-15 11:02:02.636191] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.613 [2024-05-15 11:02:02.636438] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.613 [2024-05-15 11:02:02.636464] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.613 [2024-05-15 11:02:02.636481] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.613 [2024-05-15 11:02:02.640112] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.613 [2024-05-15 11:02:02.649303] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.613 [2024-05-15 11:02:02.649807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.613 [2024-05-15 11:02:02.649839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.613 [2024-05-15 11:02:02.649857] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.613 [2024-05-15 11:02:02.650112] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.613 [2024-05-15 11:02:02.650358] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.613 [2024-05-15 11:02:02.650384] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.613 [2024-05-15 11:02:02.650400] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.613 [2024-05-15 11:02:02.654028] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.613 [2024-05-15 11:02:02.663217] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.613 [2024-05-15 11:02:02.663692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.613 [2024-05-15 11:02:02.663725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.613 [2024-05-15 11:02:02.663743] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.613 [2024-05-15 11:02:02.663996] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.613 [2024-05-15 11:02:02.664242] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.613 [2024-05-15 11:02:02.664268] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.613 [2024-05-15 11:02:02.664284] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.613 [2024-05-15 11:02:02.667906] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.613 [2024-05-15 11:02:02.677109] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.613 [2024-05-15 11:02:02.677668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.613 [2024-05-15 11:02:02.677695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.613 [2024-05-15 11:02:02.677710] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.613 [2024-05-15 11:02:02.677964] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.613 [2024-05-15 11:02:02.678223] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.613 [2024-05-15 11:02:02.678249] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.613 [2024-05-15 11:02:02.678265] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.613 [2024-05-15 11:02:02.681885] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.613 [2024-05-15 11:02:02.691091] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.613 [2024-05-15 11:02:02.691647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.613 [2024-05-15 11:02:02.691675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.613 [2024-05-15 11:02:02.691690] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.613 [2024-05-15 11:02:02.691967] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.613 [2024-05-15 11:02:02.692224] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.613 [2024-05-15 11:02:02.692249] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.613 [2024-05-15 11:02:02.692265] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.613 [2024-05-15 11:02:02.695886] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.613 [2024-05-15 11:02:02.705111] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.613 [2024-05-15 11:02:02.705597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.613 [2024-05-15 11:02:02.705630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.613 [2024-05-15 11:02:02.705649] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.613 [2024-05-15 11:02:02.705891] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.613 [2024-05-15 11:02:02.706146] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.613 [2024-05-15 11:02:02.706168] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.613 [2024-05-15 11:02:02.706182] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.613 [2024-05-15 11:02:02.709718] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.613 [2024-05-15 11:02:02.719101] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.613 [2024-05-15 11:02:02.719614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.613 [2024-05-15 11:02:02.719642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.613 [2024-05-15 11:02:02.719658] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.613 [2024-05-15 11:02:02.719940] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.613 [2024-05-15 11:02:02.720167] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.613 [2024-05-15 11:02:02.720189] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.613 [2024-05-15 11:02:02.720202] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.613 [2024-05-15 11:02:02.723805] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.613 [2024-05-15 11:02:02.732912] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.613 [2024-05-15 11:02:02.733483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.613 [2024-05-15 11:02:02.733523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.613 [2024-05-15 11:02:02.733541] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.613 [2024-05-15 11:02:02.733759] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.613 [2024-05-15 11:02:02.733987] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.613 [2024-05-15 11:02:02.734011] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.613 [2024-05-15 11:02:02.734025] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.613 [2024-05-15 11:02:02.737287] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.613 [2024-05-15 11:02:02.746709] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.613 [2024-05-15 11:02:02.747199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.613 [2024-05-15 11:02:02.747249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.613 [2024-05-15 11:02:02.747268] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.613 [2024-05-15 11:02:02.747511] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.613 [2024-05-15 11:02:02.747763] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.613 [2024-05-15 11:02:02.747790] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.613 [2024-05-15 11:02:02.747807] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.613 [2024-05-15 11:02:02.751420] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.613 [2024-05-15 11:02:02.760553] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.613 [2024-05-15 11:02:02.761058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.613 [2024-05-15 11:02:02.761091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.613 [2024-05-15 11:02:02.761110] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.613 [2024-05-15 11:02:02.761353] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.614 [2024-05-15 11:02:02.761599] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.614 [2024-05-15 11:02:02.761625] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.614 [2024-05-15 11:02:02.761647] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.614 [2024-05-15 11:02:02.765320] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.614 [2024-05-15 11:02:02.774430] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.614 [2024-05-15 11:02:02.774907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.614 [2024-05-15 11:02:02.774949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.614 [2024-05-15 11:02:02.774985] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.614 [2024-05-15 11:02:02.775205] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.614 [2024-05-15 11:02:02.775464] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.614 [2024-05-15 11:02:02.775490] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.614 [2024-05-15 11:02:02.775507] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.614 [2024-05-15 11:02:02.779166] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.614 [2024-05-15 11:02:02.788446] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.614 [2024-05-15 11:02:02.788949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.614 [2024-05-15 11:02:02.788995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.614 [2024-05-15 11:02:02.789012] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.614 [2024-05-15 11:02:02.789249] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.614 [2024-05-15 11:02:02.789495] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.614 [2024-05-15 11:02:02.789520] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.614 [2024-05-15 11:02:02.789537] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.614 [2024-05-15 11:02:02.793115] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.614 [2024-05-15 11:02:02.802090] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.614 [2024-05-15 11:02:02.802507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.614 [2024-05-15 11:02:02.802536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.614 [2024-05-15 11:02:02.802552] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.614 [2024-05-15 11:02:02.802793] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.614 [2024-05-15 11:02:02.803017] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.614 [2024-05-15 11:02:02.803040] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.614 [2024-05-15 11:02:02.803054] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.614 [2024-05-15 11:02:02.806272] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.614 [2024-05-15 11:02:02.816170] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.614 [2024-05-15 11:02:02.816622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.614 [2024-05-15 11:02:02.816659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.614 [2024-05-15 11:02:02.816679] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.614 [2024-05-15 11:02:02.816921] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.614 [2024-05-15 11:02:02.817162] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.614 [2024-05-15 11:02:02.817184] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.614 [2024-05-15 11:02:02.817199] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.614 [2024-05-15 11:02:02.820820] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.614 [2024-05-15 11:02:02.830322] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.614 [2024-05-15 11:02:02.830824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.614 [2024-05-15 11:02:02.830857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.614 [2024-05-15 11:02:02.830875] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.614 [2024-05-15 11:02:02.831133] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.614 [2024-05-15 11:02:02.831391] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.614 [2024-05-15 11:02:02.831417] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.614 [2024-05-15 11:02:02.831434] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.614 [2024-05-15 11:02:02.835030] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.614 [2024-05-15 11:02:02.844297] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.873 [2024-05-15 11:02:02.844750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.873 [2024-05-15 11:02:02.844781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.873 [2024-05-15 11:02:02.844814] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.873 [2024-05-15 11:02:02.845083] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.873 [2024-05-15 11:02:02.845337] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.873 [2024-05-15 11:02:02.845363] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.873 [2024-05-15 11:02:02.845380] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.873 [2024-05-15 11:02:02.849078] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.873 [2024-05-15 11:02:02.858388] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.873 [2024-05-15 11:02:02.859047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.873 [2024-05-15 11:02:02.859080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.873 [2024-05-15 11:02:02.859099] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.873 [2024-05-15 11:02:02.859342] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.873 [2024-05-15 11:02:02.859594] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.873 [2024-05-15 11:02:02.859619] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.873 [2024-05-15 11:02:02.859635] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.873 [2024-05-15 11:02:02.863266] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.873 [2024-05-15 11:02:02.872477] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.873 [2024-05-15 11:02:02.873000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.873 [2024-05-15 11:02:02.873030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.873 [2024-05-15 11:02:02.873046] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.873 [2024-05-15 11:02:02.873302] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.873 [2024-05-15 11:02:02.873549] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.873 [2024-05-15 11:02:02.873575] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.873 [2024-05-15 11:02:02.873590] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.873 [2024-05-15 11:02:02.877223] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.873 [2024-05-15 11:02:02.886420] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.873 [2024-05-15 11:02:02.886924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.873 [2024-05-15 11:02:02.886962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:46.873 [2024-05-15 11:02:02.886979] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:46.873 [2024-05-15 11:02:02.887235] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:46.873 [2024-05-15 11:02:02.887482] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:46.873 [2024-05-15 11:02:02.887507] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:46.873 [2024-05-15 11:02:02.887523] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.873 [2024-05-15 11:02:02.891153] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.873 [2024-05-15 11:02:02.900349] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.873 [2024-05-15 11:02:02.900854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.873 [2024-05-15 11:02:02.900886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:46.873 [2024-05-15 11:02:02.900904] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:46.873 [2024-05-15 11:02:02.901159] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:46.873 [2024-05-15 11:02:02.901406] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:46.873 [2024-05-15 11:02:02.901432] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:46.873 [2024-05-15 11:02:02.901448] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.873 [2024-05-15 11:02:02.905085] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.873 [2024-05-15 11:02:02.914276] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.873 [2024-05-15 11:02:02.914791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.873 [2024-05-15 11:02:02.914823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:46.873 [2024-05-15 11:02:02.914842] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:46.873 [2024-05-15 11:02:02.915097] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:46.873 [2024-05-15 11:02:02.915343] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:46.873 [2024-05-15 11:02:02.915369] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:46.873 [2024-05-15 11:02:02.915385] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.873 [2024-05-15 11:02:02.919014] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:46.873 [2024-05-15 11:02:02.928203] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.873 [2024-05-15 11:02:02.928701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.873 [2024-05-15 11:02:02.928729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:46.873 [2024-05-15 11:02:02.928745] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:46.873 [2024-05-15 11:02:02.929015] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:46.873 [2024-05-15 11:02:02.929261] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:46.873 [2024-05-15 11:02:02.929287] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:46.873 [2024-05-15 11:02:02.929303] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.873 [2024-05-15 11:02:02.932924] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.873 [2024-05-15 11:02:02.942124] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.873 [2024-05-15 11:02:02.942633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.873 [2024-05-15 11:02:02.942663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:46.873 [2024-05-15 11:02:02.942680] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:46.873 [2024-05-15 11:02:02.942945] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:46.873 [2024-05-15 11:02:02.943193] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:46.874 [2024-05-15 11:02:02.943219] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:46.874 [2024-05-15 11:02:02.943236] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.874 [2024-05-15 11:02:02.946856] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:46.874 [2024-05-15 11:02:02.956057] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.874 [2024-05-15 11:02:02.956549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.874 [2024-05-15 11:02:02.956581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:46.874 [2024-05-15 11:02:02.956605] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:46.874 [2024-05-15 11:02:02.956876] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:46.874 [2024-05-15 11:02:02.957134] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:46.874 [2024-05-15 11:02:02.957161] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:46.874 [2024-05-15 11:02:02.957177] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.874 [2024-05-15 11:02:02.960799] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.874 [2024-05-15 11:02:02.970007] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.874 [2024-05-15 11:02:02.970482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.874 [2024-05-15 11:02:02.970515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:46.874 [2024-05-15 11:02:02.970534] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:46.874 [2024-05-15 11:02:02.970804] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:46.874 [2024-05-15 11:02:02.971071] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:46.874 [2024-05-15 11:02:02.971098] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:46.874 [2024-05-15 11:02:02.971115] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.874 [2024-05-15 11:02:02.974737] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:46.874 [2024-05-15 11:02:02.983945] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.874 [2024-05-15 11:02:02.984454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.874 [2024-05-15 11:02:02.984483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:46.874 [2024-05-15 11:02:02.984499] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:46.874 [2024-05-15 11:02:02.984745] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:46.874 [2024-05-15 11:02:02.985007] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:46.874 [2024-05-15 11:02:02.985034] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:46.874 [2024-05-15 11:02:02.985051] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.874 [2024-05-15 11:02:02.988670] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.874 [2024-05-15 11:02:02.997926] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.874 [2024-05-15 11:02:02.998436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.874 [2024-05-15 11:02:02.998466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:46.874 [2024-05-15 11:02:02.998483] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:46.874 [2024-05-15 11:02:02.998740] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:46.874 [2024-05-15 11:02:02.999000] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:46.874 [2024-05-15 11:02:02.999034] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:46.874 [2024-05-15 11:02:02.999051] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.874 [2024-05-15 11:02:03.002728] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:46.874 [2024-05-15 11:02:03.011951] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.874 [2024-05-15 11:02:03.012448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.874 [2024-05-15 11:02:03.012481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:46.874 [2024-05-15 11:02:03.012499] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:46.874 [2024-05-15 11:02:03.012741] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:46.874 [2024-05-15 11:02:03.012997] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:46.874 [2024-05-15 11:02:03.013024] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:46.874 [2024-05-15 11:02:03.013040] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.874 [2024-05-15 11:02:03.016658] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.874 [2024-05-15 11:02:03.025851] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.874 [2024-05-15 11:02:03.026334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.874 [2024-05-15 11:02:03.026366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:46.874 [2024-05-15 11:02:03.026384] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:46.874 [2024-05-15 11:02:03.026626] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:46.874 [2024-05-15 11:02:03.026872] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:46.874 [2024-05-15 11:02:03.026898] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:46.874 [2024-05-15 11:02:03.026915] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.874 [2024-05-15 11:02:03.030549] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:46.874 [2024-05-15 11:02:03.039746] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.874 [2024-05-15 11:02:03.040268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.874 [2024-05-15 11:02:03.040298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:46.874 [2024-05-15 11:02:03.040315] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:46.874 [2024-05-15 11:02:03.040569] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:46.874 [2024-05-15 11:02:03.040815] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:46.874 [2024-05-15 11:02:03.040840] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:46.874 [2024-05-15 11:02:03.040856] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.874 [2024-05-15 11:02:03.044487] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.874 [2024-05-15 11:02:03.053689] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.874 [2024-05-15 11:02:03.054211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.874 [2024-05-15 11:02:03.054244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:46.874 [2024-05-15 11:02:03.054263] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:46.874 [2024-05-15 11:02:03.054504] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:46.874 [2024-05-15 11:02:03.054751] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:46.874 [2024-05-15 11:02:03.054776] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:46.874 [2024-05-15 11:02:03.054792] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.874 [2024-05-15 11:02:03.058423] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:46.874 [2024-05-15 11:02:03.067614] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.874 [2024-05-15 11:02:03.068127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.874 [2024-05-15 11:02:03.068160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:46.874 [2024-05-15 11:02:03.068178] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:46.874 [2024-05-15 11:02:03.068420] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:46.874 [2024-05-15 11:02:03.068666] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:46.874 [2024-05-15 11:02:03.068693] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:46.874 [2024-05-15 11:02:03.068709] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.874 [2024-05-15 11:02:03.072347] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.874 [2024-05-15 11:02:03.081540] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.874 [2024-05-15 11:02:03.082112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.874 [2024-05-15 11:02:03.082144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:46.874 [2024-05-15 11:02:03.082162] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:46.874 [2024-05-15 11:02:03.082405] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:46.874 [2024-05-15 11:02:03.082650] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:46.874 [2024-05-15 11:02:03.082676] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:46.874 [2024-05-15 11:02:03.082693] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.874 [2024-05-15 11:02:03.086331] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:46.874 [2024-05-15 11:02:03.095546] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.875 [2024-05-15 11:02:03.096047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.875 [2024-05-15 11:02:03.096078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:46.875 [2024-05-15 11:02:03.096100] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:46.875 [2024-05-15 11:02:03.096355] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:46.875 [2024-05-15 11:02:03.096602] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:46.875 [2024-05-15 11:02:03.096631] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:46.875 [2024-05-15 11:02:03.096647] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.875 [2024-05-15 11:02:03.100283] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.133 [2024-05-15 11:02:03.109573] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.133 [2024-05-15 11:02:03.110055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.133 [2024-05-15 11:02:03.110088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.133 [2024-05-15 11:02:03.110111] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.133 [2024-05-15 11:02:03.110358] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.133 [2024-05-15 11:02:03.110606] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.133 [2024-05-15 11:02:03.110631] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.133 [2024-05-15 11:02:03.110646] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.133 [2024-05-15 11:02:03.114276] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.133 [2024-05-15 11:02:03.123475] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.133 [2024-05-15 11:02:03.123993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.133 [2024-05-15 11:02:03.124026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.133 [2024-05-15 11:02:03.124044] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.133 [2024-05-15 11:02:03.124288] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.133 [2024-05-15 11:02:03.124534] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.133 [2024-05-15 11:02:03.124559] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.133 [2024-05-15 11:02:03.124575] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.133 [2024-05-15 11:02:03.128206] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.133 [2024-05-15 11:02:03.137400] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.133 [2024-05-15 11:02:03.137882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.133 [2024-05-15 11:02:03.137914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.133 [2024-05-15 11:02:03.137940] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.133 [2024-05-15 11:02:03.138184] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.133 [2024-05-15 11:02:03.138431] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.133 [2024-05-15 11:02:03.138462] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.133 [2024-05-15 11:02:03.138479] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.133 [2024-05-15 11:02:03.142109] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.133 [2024-05-15 11:02:03.151311] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.133 [2024-05-15 11:02:03.151785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.133 [2024-05-15 11:02:03.151817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.133 [2024-05-15 11:02:03.151835] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.133 [2024-05-15 11:02:03.152086] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.133 [2024-05-15 11:02:03.152334] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.133 [2024-05-15 11:02:03.152358] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.133 [2024-05-15 11:02:03.152375] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.133 [2024-05-15 11:02:03.156001] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.133 [2024-05-15 11:02:03.165210] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.133 [2024-05-15 11:02:03.165758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.133 [2024-05-15 11:02:03.165790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.133 [2024-05-15 11:02:03.165808] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.133 [2024-05-15 11:02:03.166060] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.133 [2024-05-15 11:02:03.166307] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.133 [2024-05-15 11:02:03.166332] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.133 [2024-05-15 11:02:03.166348] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.133 [2024-05-15 11:02:03.169985] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.133 [2024-05-15 11:02:03.179190] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.133 [2024-05-15 11:02:03.179734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.133 [2024-05-15 11:02:03.179763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.133 [2024-05-15 11:02:03.179794] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.133 [2024-05-15 11:02:03.180059] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.133 [2024-05-15 11:02:03.180307] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.133 [2024-05-15 11:02:03.180332] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.133 [2024-05-15 11:02:03.180349] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.133 [2024-05-15 11:02:03.183990] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.133 [2024-05-15 11:02:03.193196] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.133 [2024-05-15 11:02:03.193724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.133 [2024-05-15 11:02:03.193757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.133 [2024-05-15 11:02:03.193775] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.133 [2024-05-15 11:02:03.194029] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.133 [2024-05-15 11:02:03.194284] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.133 [2024-05-15 11:02:03.194309] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.133 [2024-05-15 11:02:03.194325] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.133 [2024-05-15 11:02:03.197957] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.133 [2024-05-15 11:02:03.207166] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.133 [2024-05-15 11:02:03.207676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.133 [2024-05-15 11:02:03.207705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.133 [2024-05-15 11:02:03.207721] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.133 [2024-05-15 11:02:03.207992] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.133 [2024-05-15 11:02:03.208239] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.133 [2024-05-15 11:02:03.208275] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.133 [2024-05-15 11:02:03.208290] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.133 [2024-05-15 11:02:03.211904] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.133 [2024-05-15 11:02:03.220756] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.133 [2024-05-15 11:02:03.221229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.134 [2024-05-15 11:02:03.221258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.134 [2024-05-15 11:02:03.221275] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.134 [2024-05-15 11:02:03.221534] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.134 [2024-05-15 11:02:03.221758] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.134 [2024-05-15 11:02:03.221780] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.134 [2024-05-15 11:02:03.221795] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.134 [2024-05-15 11:02:03.224993] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.134 [2024-05-15 11:02:03.234263] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.134 [2024-05-15 11:02:03.234777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.134 [2024-05-15 11:02:03.234806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.134 [2024-05-15 11:02:03.234822] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.134 [2024-05-15 11:02:03.235084] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.134 [2024-05-15 11:02:03.235330] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.134 [2024-05-15 11:02:03.235351] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.134 [2024-05-15 11:02:03.235364] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.134 [2024-05-15 11:02:03.238521] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.134 [2024-05-15 11:02:03.247791] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.134 [2024-05-15 11:02:03.248268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.134 [2024-05-15 11:02:03.248311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.134 [2024-05-15 11:02:03.248328] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.134 [2024-05-15 11:02:03.248576] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.134 [2024-05-15 11:02:03.248772] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.134 [2024-05-15 11:02:03.248793] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.134 [2024-05-15 11:02:03.248806] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.134 [2024-05-15 11:02:03.252093] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.134 [2024-05-15 11:02:03.261274] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.134 [2024-05-15 11:02:03.261814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.134 [2024-05-15 11:02:03.261843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.134 [2024-05-15 11:02:03.261859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.134 [2024-05-15 11:02:03.262115] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.134 [2024-05-15 11:02:03.262357] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.134 [2024-05-15 11:02:03.262378] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.134 [2024-05-15 11:02:03.262392] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.134 [2024-05-15 11:02:03.265744] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.134 [2024-05-15 11:02:03.274640] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.134 [2024-05-15 11:02:03.275102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.134 [2024-05-15 11:02:03.275132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.134 [2024-05-15 11:02:03.275149] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.134 [2024-05-15 11:02:03.275399] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.134 [2024-05-15 11:02:03.275597] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.134 [2024-05-15 11:02:03.275618] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.134 [2024-05-15 11:02:03.275636] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.134 [2024-05-15 11:02:03.278709] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.134 [2024-05-15 11:02:03.287924] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.134 [2024-05-15 11:02:03.288340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.134 [2024-05-15 11:02:03.288369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.134 [2024-05-15 11:02:03.288385] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.134 [2024-05-15 11:02:03.288635] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.134 [2024-05-15 11:02:03.288831] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.134 [2024-05-15 11:02:03.288853] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.134 [2024-05-15 11:02:03.288866] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.134 [2024-05-15 11:02:03.291902] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.134 [2024-05-15 11:02:03.301182] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.134 [2024-05-15 11:02:03.301708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.134 [2024-05-15 11:02:03.301736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.134 [2024-05-15 11:02:03.301752] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.134 [2024-05-15 11:02:03.302030] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.134 [2024-05-15 11:02:03.302255] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.134 [2024-05-15 11:02:03.302277] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.134 [2024-05-15 11:02:03.302290] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.134 [2024-05-15 11:02:03.305371] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.134 [2024-05-15 11:02:03.314527] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.134 [2024-05-15 11:02:03.315048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.134 [2024-05-15 11:02:03.315077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.134 [2024-05-15 11:02:03.315092] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.134 [2024-05-15 11:02:03.315338] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.134 [2024-05-15 11:02:03.315534] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.134 [2024-05-15 11:02:03.315555] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.134 [2024-05-15 11:02:03.315569] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.134 [2024-05-15 11:02:03.318570] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.134 [2024-05-15 11:02:03.327737] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.134 [2024-05-15 11:02:03.328181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.134 [2024-05-15 11:02:03.328214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.134 [2024-05-15 11:02:03.328231] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.134 [2024-05-15 11:02:03.328478] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.134 [2024-05-15 11:02:03.328675] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.134 [2024-05-15 11:02:03.328696] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.134 [2024-05-15 11:02:03.328709] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.134 [2024-05-15 11:02:03.331731] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.134 [2024-05-15 11:02:03.341045] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.134 [2024-05-15 11:02:03.341549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.134 [2024-05-15 11:02:03.341577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.134 [2024-05-15 11:02:03.341593] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.134 [2024-05-15 11:02:03.341842] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.134 [2024-05-15 11:02:03.342089] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.134 [2024-05-15 11:02:03.342116] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.134 [2024-05-15 11:02:03.342130] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.134 [2024-05-15 11:02:03.345136] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.134 [2024-05-15 11:02:03.354375] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.134 [2024-05-15 11:02:03.355000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.134 [2024-05-15 11:02:03.355056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.134 [2024-05-15 11:02:03.355075] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.134 [2024-05-15 11:02:03.355326] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.134 [2024-05-15 11:02:03.355524] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.135 [2024-05-15 11:02:03.355545] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.135 [2024-05-15 11:02:03.355558] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.135 [2024-05-15 11:02:03.358601] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.393 [2024-05-15 11:02:03.367879] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.393 [2024-05-15 11:02:03.368364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.393 [2024-05-15 11:02:03.368396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.393 [2024-05-15 11:02:03.368413] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.393 [2024-05-15 11:02:03.368651] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.393 [2024-05-15 11:02:03.368883] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.393 [2024-05-15 11:02:03.368920] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.393 [2024-05-15 11:02:03.368948] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.393 [2024-05-15 11:02:03.372187] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.393 [2024-05-15 11:02:03.381206] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.393 [2024-05-15 11:02:03.381679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.394 [2024-05-15 11:02:03.381710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.394 [2024-05-15 11:02:03.381726] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.394 [2024-05-15 11:02:03.381991] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.394 [2024-05-15 11:02:03.382220] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.394 [2024-05-15 11:02:03.382242] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.394 [2024-05-15 11:02:03.382257] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.394 [2024-05-15 11:02:03.385310] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.394 [2024-05-15 11:02:03.394606] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.394 [2024-05-15 11:02:03.395057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.394 [2024-05-15 11:02:03.395087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.394 [2024-05-15 11:02:03.395104] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.394 [2024-05-15 11:02:03.395352] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.394 [2024-05-15 11:02:03.395548] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.394 [2024-05-15 11:02:03.395569] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.394 [2024-05-15 11:02:03.395582] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.394 [2024-05-15 11:02:03.398617] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.394 [2024-05-15 11:02:03.407866] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.394 [2024-05-15 11:02:03.408346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.394 [2024-05-15 11:02:03.408376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.394 [2024-05-15 11:02:03.408392] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.394 [2024-05-15 11:02:03.408641] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.394 [2024-05-15 11:02:03.408837] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.394 [2024-05-15 11:02:03.408859] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.394 [2024-05-15 11:02:03.408871] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.394 [2024-05-15 11:02:03.411963] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.394 [2024-05-15 11:02:03.421119] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.394 [2024-05-15 11:02:03.421587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.394 [2024-05-15 11:02:03.421616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.394 [2024-05-15 11:02:03.421633] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.394 [2024-05-15 11:02:03.421881] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.394 [2024-05-15 11:02:03.422112] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.394 [2024-05-15 11:02:03.422135] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.394 [2024-05-15 11:02:03.422149] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.394 [2024-05-15 11:02:03.425162] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.394 [2024-05-15 11:02:03.434372] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.394 [2024-05-15 11:02:03.434785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.394 [2024-05-15 11:02:03.434813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.394 [2024-05-15 11:02:03.434829] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.394 [2024-05-15 11:02:03.435062] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.394 [2024-05-15 11:02:03.435300] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.394 [2024-05-15 11:02:03.435322] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.394 [2024-05-15 11:02:03.435335] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.394 [2024-05-15 11:02:03.438328] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.394 [2024-05-15 11:02:03.447707] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.394 [2024-05-15 11:02:03.448138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.394 [2024-05-15 11:02:03.448166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.394 [2024-05-15 11:02:03.448182] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.394 [2024-05-15 11:02:03.448435] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.394 [2024-05-15 11:02:03.448631] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.394 [2024-05-15 11:02:03.448652] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.394 [2024-05-15 11:02:03.448665] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.394 [2024-05-15 11:02:03.451669] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.394 [2024-05-15 11:02:03.461076] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.394 [2024-05-15 11:02:03.461554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.394 [2024-05-15 11:02:03.461583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.394 [2024-05-15 11:02:03.461604] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.394 [2024-05-15 11:02:03.461855] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.394 [2024-05-15 11:02:03.462098] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.394 [2024-05-15 11:02:03.462121] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.394 [2024-05-15 11:02:03.462134] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.394 [2024-05-15 11:02:03.465152] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.394 [2024-05-15 11:02:03.474480] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.394 [2024-05-15 11:02:03.474948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.394 [2024-05-15 11:02:03.474978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.394 [2024-05-15 11:02:03.474995] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.394 [2024-05-15 11:02:03.475263] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.394 [2024-05-15 11:02:03.475459] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.394 [2024-05-15 11:02:03.475480] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.394 [2024-05-15 11:02:03.475494] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.394 [2024-05-15 11:02:03.478516] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.394 [2024-05-15 11:02:03.487729] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.394 [2024-05-15 11:02:03.488440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.394 [2024-05-15 11:02:03.488478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.394 [2024-05-15 11:02:03.488495] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.394 [2024-05-15 11:02:03.488709] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.394 [2024-05-15 11:02:03.488906] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.394 [2024-05-15 11:02:03.488926] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.394 [2024-05-15 11:02:03.488966] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.394 [2024-05-15 11:02:03.492046] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.394 [2024-05-15 11:02:03.501050] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.394 [2024-05-15 11:02:03.501829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.394 [2024-05-15 11:02:03.501881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.394 [2024-05-15 11:02:03.501898] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.394 [2024-05-15 11:02:03.502154] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.394 [2024-05-15 11:02:03.502393] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.394 [2024-05-15 11:02:03.502423] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.394 [2024-05-15 11:02:03.502439] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.394 [2024-05-15 11:02:03.505585] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.394 [2024-05-15 11:02:03.514610] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.394 [2024-05-15 11:02:03.515154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.394 [2024-05-15 11:02:03.515185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.394 [2024-05-15 11:02:03.515202] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.394 [2024-05-15 11:02:03.515462] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.395 [2024-05-15 11:02:03.515705] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.395 [2024-05-15 11:02:03.515728] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.395 [2024-05-15 11:02:03.515743] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.395 [2024-05-15 11:02:03.518877] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.395 [2024-05-15 11:02:03.527919] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.395 [2024-05-15 11:02:03.528469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.395 [2024-05-15 11:02:03.528499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.395 [2024-05-15 11:02:03.528530] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.395 [2024-05-15 11:02:03.528777] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.395 [2024-05-15 11:02:03.529000] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.395 [2024-05-15 11:02:03.529020] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.395 [2024-05-15 11:02:03.529034] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.395 [2024-05-15 11:02:03.532052] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.395 [2024-05-15 11:02:03.541294] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.395 [2024-05-15 11:02:03.541744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.395 [2024-05-15 11:02:03.541773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.395 [2024-05-15 11:02:03.541790] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.395 [2024-05-15 11:02:03.542068] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.395 [2024-05-15 11:02:03.542293] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.395 [2024-05-15 11:02:03.542314] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.395 [2024-05-15 11:02:03.542328] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.395 [2024-05-15 11:02:03.545319] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.395 [2024-05-15 11:02:03.554749] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.395 [2024-05-15 11:02:03.555235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.395 [2024-05-15 11:02:03.555264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.395 [2024-05-15 11:02:03.555296] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.395 [2024-05-15 11:02:03.555528] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.395 [2024-05-15 11:02:03.555724] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.395 [2024-05-15 11:02:03.555744] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.395 [2024-05-15 11:02:03.555757] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.395 [2024-05-15 11:02:03.558780] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.395 [2024-05-15 11:02:03.567995] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.395 [2024-05-15 11:02:03.568489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.395 [2024-05-15 11:02:03.568519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.395 [2024-05-15 11:02:03.568535] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.395 [2024-05-15 11:02:03.568785] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.395 [2024-05-15 11:02:03.569023] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.395 [2024-05-15 11:02:03.569047] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.395 [2024-05-15 11:02:03.569061] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.395 [2024-05-15 11:02:03.572108] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.395 [2024-05-15 11:02:03.581334] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.395 [2024-05-15 11:02:03.581819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.395 [2024-05-15 11:02:03.581847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.395 [2024-05-15 11:02:03.581863] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.395 [2024-05-15 11:02:03.582130] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.395 [2024-05-15 11:02:03.582362] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.395 [2024-05-15 11:02:03.582383] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.395 [2024-05-15 11:02:03.582396] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.395 [2024-05-15 11:02:03.585390] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.395 [2024-05-15 11:02:03.594623] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.395 [2024-05-15 11:02:03.595104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.395 [2024-05-15 11:02:03.595134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.395 [2024-05-15 11:02:03.595150] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.395 [2024-05-15 11:02:03.595407] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.395 [2024-05-15 11:02:03.595603] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.395 [2024-05-15 11:02:03.595623] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.395 [2024-05-15 11:02:03.595637] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.395 [2024-05-15 11:02:03.598671] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.395 [2024-05-15 11:02:03.607863] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.395 [2024-05-15 11:02:03.608347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.395 [2024-05-15 11:02:03.608377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.395 [2024-05-15 11:02:03.608393] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.395 [2024-05-15 11:02:03.608642] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.395 [2024-05-15 11:02:03.608838] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.395 [2024-05-15 11:02:03.608859] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.395 [2024-05-15 11:02:03.608873] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.395 [2024-05-15 11:02:03.611933] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.395 [2024-05-15 11:02:03.621183] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.395 [2024-05-15 11:02:03.621671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.395 [2024-05-15 11:02:03.621700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.395 [2024-05-15 11:02:03.621717] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.395 [2024-05-15 11:02:03.621979] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.395 [2024-05-15 11:02:03.622227] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.395 [2024-05-15 11:02:03.622250] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.395 [2024-05-15 11:02:03.622264] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.654 [2024-05-15 11:02:03.625696] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.654 [2024-05-15 11:02:03.634485] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.654 [2024-05-15 11:02:03.634953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.654 [2024-05-15 11:02:03.634983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.654 [2024-05-15 11:02:03.635000] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.654 [2024-05-15 11:02:03.635253] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.654 [2024-05-15 11:02:03.635465] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.654 [2024-05-15 11:02:03.635486] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.654 [2024-05-15 11:02:03.635503] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.654 [2024-05-15 11:02:03.638523] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.654 [2024-05-15 11:02:03.647742] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.654 [2024-05-15 11:02:03.648371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.654 [2024-05-15 11:02:03.648427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.654 [2024-05-15 11:02:03.648445] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.654 [2024-05-15 11:02:03.648677] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.654 [2024-05-15 11:02:03.648874] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.654 [2024-05-15 11:02:03.648895] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.654 [2024-05-15 11:02:03.648909] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.654 [2024-05-15 11:02:03.651972] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.654 [2024-05-15 11:02:03.661088] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.654 [2024-05-15 11:02:03.661536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.654 [2024-05-15 11:02:03.661567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.654 [2024-05-15 11:02:03.661584] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.655 [2024-05-15 11:02:03.661819] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.655 [2024-05-15 11:02:03.662062] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.655 [2024-05-15 11:02:03.662086] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.655 [2024-05-15 11:02:03.662100] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.655 [2024-05-15 11:02:03.665118] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.655 [2024-05-15 11:02:03.674442] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.655 [2024-05-15 11:02:03.674892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.655 [2024-05-15 11:02:03.674942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.655 [2024-05-15 11:02:03.674971] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.655 [2024-05-15 11:02:03.675206] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.655 [2024-05-15 11:02:03.675419] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.655 [2024-05-15 11:02:03.675440] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.655 [2024-05-15 11:02:03.675453] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.655 [2024-05-15 11:02:03.678488] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.655 [2024-05-15 11:02:03.687696] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.655 [2024-05-15 11:02:03.688209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.655 [2024-05-15 11:02:03.688240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.655 [2024-05-15 11:02:03.688257] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.655 [2024-05-15 11:02:03.688501] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.655 [2024-05-15 11:02:03.688697] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.655 [2024-05-15 11:02:03.688717] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.655 [2024-05-15 11:02:03.688731] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.655 [2024-05-15 11:02:03.691781] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.655 [2024-05-15 11:02:03.701011] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.655 [2024-05-15 11:02:03.701509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.655 [2024-05-15 11:02:03.701539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.655 [2024-05-15 11:02:03.701555] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.655 [2024-05-15 11:02:03.701808] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.655 [2024-05-15 11:02:03.702052] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.655 [2024-05-15 11:02:03.702076] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.655 [2024-05-15 11:02:03.702090] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.655 [2024-05-15 11:02:03.705128] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.655 [2024-05-15 11:02:03.714334] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.655 [2024-05-15 11:02:03.714814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.655 [2024-05-15 11:02:03.714843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.655 [2024-05-15 11:02:03.714859] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.655 [2024-05-15 11:02:03.715127] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.655 [2024-05-15 11:02:03.715345] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.655 [2024-05-15 11:02:03.715366] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.655 [2024-05-15 11:02:03.715379] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.655 [2024-05-15 11:02:03.718371] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.655 [2024-05-15 11:02:03.727583] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.655 [2024-05-15 11:02:03.728100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.655 [2024-05-15 11:02:03.728129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.655 [2024-05-15 11:02:03.728145] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.655 [2024-05-15 11:02:03.728399] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.655 [2024-05-15 11:02:03.728595] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.655 [2024-05-15 11:02:03.728615] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.655 [2024-05-15 11:02:03.728629] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.655 [2024-05-15 11:02:03.731666] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.655 [2024-05-15 11:02:03.740795] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.655 [2024-05-15 11:02:03.741246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.655 [2024-05-15 11:02:03.741275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.655 [2024-05-15 11:02:03.741291] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.655 [2024-05-15 11:02:03.741540] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.655 [2024-05-15 11:02:03.741737] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.655 [2024-05-15 11:02:03.741757] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.655 [2024-05-15 11:02:03.741771] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.655 [2024-05-15 11:02:03.744791] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.655 [2024-05-15 11:02:03.754115] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.655 [2024-05-15 11:02:03.754617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.655 [2024-05-15 11:02:03.754646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.655 [2024-05-15 11:02:03.754662] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.655 [2024-05-15 11:02:03.754896] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.655 [2024-05-15 11:02:03.755142] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.655 [2024-05-15 11:02:03.755163] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.655 [2024-05-15 11:02:03.755177] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.655 [2024-05-15 11:02:03.758354] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.655 [2024-05-15 11:02:03.767629] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.655 [2024-05-15 11:02:03.768252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.655 [2024-05-15 11:02:03.768295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.655 [2024-05-15 11:02:03.768313] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.655 [2024-05-15 11:02:03.768568] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.655 [2024-05-15 11:02:03.768794] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.655 [2024-05-15 11:02:03.768815] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.655 [2024-05-15 11:02:03.768833] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.655 [2024-05-15 11:02:03.771978] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.656 [2024-05-15 11:02:03.781075] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.656 [2024-05-15 11:02:03.781589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.656 [2024-05-15 11:02:03.781620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.656 [2024-05-15 11:02:03.781637] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.656 [2024-05-15 11:02:03.781885] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.656 [2024-05-15 11:02:03.782108] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.656 [2024-05-15 11:02:03.782146] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.656 [2024-05-15 11:02:03.782160] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.656 [2024-05-15 11:02:03.785290] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.656 [2024-05-15 11:02:03.794556] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.656 [2024-05-15 11:02:03.795041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.656 [2024-05-15 11:02:03.795070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.656 [2024-05-15 11:02:03.795086] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.656 [2024-05-15 11:02:03.795337] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.656 [2024-05-15 11:02:03.795533] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.656 [2024-05-15 11:02:03.795553] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.656 [2024-05-15 11:02:03.795567] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.656 [2024-05-15 11:02:03.798627] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.656 [2024-05-15 11:02:03.807819] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.656 [2024-05-15 11:02:03.808385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.656 [2024-05-15 11:02:03.808427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.656 [2024-05-15 11:02:03.808463] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.656 [2024-05-15 11:02:03.808702] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.656 [2024-05-15 11:02:03.808899] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.656 [2024-05-15 11:02:03.808920] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.656 [2024-05-15 11:02:03.808958] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.656 [2024-05-15 11:02:03.811987] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.656 [2024-05-15 11:02:03.821025] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.656 [2024-05-15 11:02:03.821510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.656 [2024-05-15 11:02:03.821546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.656 [2024-05-15 11:02:03.821564] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.656 [2024-05-15 11:02:03.821816] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.656 [2024-05-15 11:02:03.822043] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.656 [2024-05-15 11:02:03.822065] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.656 [2024-05-15 11:02:03.822078] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.656 [2024-05-15 11:02:03.825155] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.656 [2024-05-15 11:02:03.834569] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.656 [2024-05-15 11:02:03.835064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.656 [2024-05-15 11:02:03.835094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.656 [2024-05-15 11:02:03.835111] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.656 [2024-05-15 11:02:03.835342] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.656 [2024-05-15 11:02:03.835538] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.656 [2024-05-15 11:02:03.835558] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.656 [2024-05-15 11:02:03.835571] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.656 [2024-05-15 11:02:03.838785] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.656 [2024-05-15 11:02:03.848005] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.656 [2024-05-15 11:02:03.848517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.656 [2024-05-15 11:02:03.848546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.656 [2024-05-15 11:02:03.848562] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.656 [2024-05-15 11:02:03.848795] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.656 [2024-05-15 11:02:03.849038] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.656 [2024-05-15 11:02:03.849061] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.656 [2024-05-15 11:02:03.849076] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.656 [2024-05-15 11:02:03.852211] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.656 [2024-05-15 11:02:03.861467] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.656 [2024-05-15 11:02:03.861985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.656 [2024-05-15 11:02:03.862014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.656 [2024-05-15 11:02:03.862031] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.656 [2024-05-15 11:02:03.862300] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.656 [2024-05-15 11:02:03.862503] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.656 [2024-05-15 11:02:03.862523] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.656 [2024-05-15 11:02:03.862536] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.656 [2024-05-15 11:02:03.865660] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.656 [2024-05-15 11:02:03.874913] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.656 [2024-05-15 11:02:03.875383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.656 [2024-05-15 11:02:03.875412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.656 [2024-05-15 11:02:03.875428] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.656 [2024-05-15 11:02:03.875680] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.656 [2024-05-15 11:02:03.875877] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.656 [2024-05-15 11:02:03.875897] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.656 [2024-05-15 11:02:03.875924] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.656 [2024-05-15 11:02:03.878876] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.916 [2024-05-15 11:02:03.888400] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.916 [2024-05-15 11:02:03.888917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.916 [2024-05-15 11:02:03.888954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.916 [2024-05-15 11:02:03.888971] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.916 [2024-05-15 11:02:03.889211] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.916 [2024-05-15 11:02:03.889424] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.916 [2024-05-15 11:02:03.889445] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.916 [2024-05-15 11:02:03.889458] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.916 [2024-05-15 11:02:03.892784] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.916 [2024-05-15 11:02:03.901707] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.916 [2024-05-15 11:02:03.902187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.916 [2024-05-15 11:02:03.902227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.916 [2024-05-15 11:02:03.902244] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.916 [2024-05-15 11:02:03.902482] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.916 [2024-05-15 11:02:03.902678] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.916 [2024-05-15 11:02:03.902698] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.916 [2024-05-15 11:02:03.902712] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.916 [2024-05-15 11:02:03.905712] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.916 [2024-05-15 11:02:03.915048] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.916 [2024-05-15 11:02:03.915515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.916 [2024-05-15 11:02:03.915543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.916 [2024-05-15 11:02:03.915559] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.916 [2024-05-15 11:02:03.915808] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.916 [2024-05-15 11:02:03.916034] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.916 [2024-05-15 11:02:03.916056] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.916 [2024-05-15 11:02:03.916071] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.916 [2024-05-15 11:02:03.919097] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.916 [2024-05-15 11:02:03.928378] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.916 [2024-05-15 11:02:03.928830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.916 [2024-05-15 11:02:03.928858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.916 [2024-05-15 11:02:03.928874] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.916 [2024-05-15 11:02:03.929139] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.916 [2024-05-15 11:02:03.929354] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.916 [2024-05-15 11:02:03.929374] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.916 [2024-05-15 11:02:03.929387] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.916 [2024-05-15 11:02:03.932421] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.916 [2024-05-15 11:02:03.941647] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.916 [2024-05-15 11:02:03.942164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.916 [2024-05-15 11:02:03.942193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.916 [2024-05-15 11:02:03.942210] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.916 [2024-05-15 11:02:03.942461] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.916 [2024-05-15 11:02:03.942658] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.917 [2024-05-15 11:02:03.942678] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.917 [2024-05-15 11:02:03.942691] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.917 [2024-05-15 11:02:03.945717] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.917 [2024-05-15 11:02:03.954961] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.917 [2024-05-15 11:02:03.955380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.917 [2024-05-15 11:02:03.955408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.917 [2024-05-15 11:02:03.955429] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.917 [2024-05-15 11:02:03.955685] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.917 [2024-05-15 11:02:03.955882] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.917 [2024-05-15 11:02:03.955903] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.917 [2024-05-15 11:02:03.955937] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.917 [2024-05-15 11:02:03.958967] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.917 [2024-05-15 11:02:03.968852] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.917 [2024-05-15 11:02:03.969410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.917 [2024-05-15 11:02:03.969443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.917 [2024-05-15 11:02:03.969462] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.917 [2024-05-15 11:02:03.969705] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.917 [2024-05-15 11:02:03.969976] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.917 [2024-05-15 11:02:03.969999] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.917 [2024-05-15 11:02:03.970013] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.917 [2024-05-15 11:02:03.973575] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.917 [2024-05-15 11:02:03.982905] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.917 [2024-05-15 11:02:03.983576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.917 [2024-05-15 11:02:03.983604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.917 [2024-05-15 11:02:03.983634] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.917 [2024-05-15 11:02:03.983887] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.917 [2024-05-15 11:02:03.984146] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.917 [2024-05-15 11:02:03.984173] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.917 [2024-05-15 11:02:03.984189] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.917 [2024-05-15 11:02:03.987811] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.917 [2024-05-15 11:02:03.996821] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.917 [2024-05-15 11:02:03.997333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.917 [2024-05-15 11:02:03.997365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.917 [2024-05-15 11:02:03.997384] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.917 [2024-05-15 11:02:03.997627] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.917 [2024-05-15 11:02:03.997874] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.917 [2024-05-15 11:02:03.997905] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.917 [2024-05-15 11:02:03.997922] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.917 [2024-05-15 11:02:04.001563] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.917 [2024-05-15 11:02:04.010760] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.917 [2024-05-15 11:02:04.011290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.917 [2024-05-15 11:02:04.011322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.917 [2024-05-15 11:02:04.011341] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.917 [2024-05-15 11:02:04.011583] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.917 [2024-05-15 11:02:04.011829] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.917 [2024-05-15 11:02:04.011853] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.917 [2024-05-15 11:02:04.011868] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.917 [2024-05-15 11:02:04.015551] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.917 [2024-05-15 11:02:04.024875] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.917 [2024-05-15 11:02:04.025399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.917 [2024-05-15 11:02:04.025432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.917 [2024-05-15 11:02:04.025450] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.917 [2024-05-15 11:02:04.025692] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.917 [2024-05-15 11:02:04.025948] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.917 [2024-05-15 11:02:04.025975] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.917 [2024-05-15 11:02:04.025992] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.917 [2024-05-15 11:02:04.029615] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.917 [2024-05-15 11:02:04.038809] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.917 [2024-05-15 11:02:04.039455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.917 [2024-05-15 11:02:04.039516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.917 [2024-05-15 11:02:04.039534] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.917 [2024-05-15 11:02:04.039776] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.917 [2024-05-15 11:02:04.040033] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.917 [2024-05-15 11:02:04.040061] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.917 [2024-05-15 11:02:04.040077] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.917 [2024-05-15 11:02:04.043696] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.917 [2024-05-15 11:02:04.052899] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.917 [2024-05-15 11:02:04.053450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.917 [2024-05-15 11:02:04.053483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.917 [2024-05-15 11:02:04.053501] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.917 [2024-05-15 11:02:04.053744] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.917 [2024-05-15 11:02:04.054004] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.917 [2024-05-15 11:02:04.054030] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.917 [2024-05-15 11:02:04.054047] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.917 [2024-05-15 11:02:04.057668] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.917 [2024-05-15 11:02:04.066857] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.917 [2024-05-15 11:02:04.067372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.917 [2024-05-15 11:02:04.067400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.917 [2024-05-15 11:02:04.067415] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.917 [2024-05-15 11:02:04.067666] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.917 [2024-05-15 11:02:04.067914] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.918 [2024-05-15 11:02:04.067954] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.918 [2024-05-15 11:02:04.067972] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.918 [2024-05-15 11:02:04.071597] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:47.918 [2024-05-15 11:02:04.080789] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:47.918 [2024-05-15 11:02:04.081288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:47.918 [2024-05-15 11:02:04.081320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:47.918 [2024-05-15 11:02:04.081338] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:47.918 [2024-05-15 11:02:04.081580] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:47.918 [2024-05-15 11:02:04.081826] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:47.918 [2024-05-15 11:02:04.081852] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:47.918 [2024-05-15 11:02:04.081868] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:47.918 [2024-05-15 11:02:04.085503] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:47.918 [2024-05-15 11:02:04.094692] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:47.918 [2024-05-15 11:02:04.095182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:47.918 [2024-05-15 11:02:04.095215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:47.918 [2024-05-15 11:02:04.095235] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:47.918 [2024-05-15 11:02:04.095484] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:47.918 [2024-05-15 11:02:04.095730] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:47.918 [2024-05-15 11:02:04.095756] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:47.918 [2024-05-15 11:02:04.095772] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:47.918 [2024-05-15 11:02:04.099405] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:47.918 [2024-05-15 11:02:04.108598] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:47.918 [2024-05-15 11:02:04.109072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:47.918 [2024-05-15 11:02:04.109106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:47.918 [2024-05-15 11:02:04.109124] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:47.918 [2024-05-15 11:02:04.109367] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:47.918 [2024-05-15 11:02:04.109613] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:47.918 [2024-05-15 11:02:04.109638] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:47.918 [2024-05-15 11:02:04.109655] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:47.918 [2024-05-15 11:02:04.113290] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:47.918 [2024-05-15 11:02:04.122690] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:47.918 [2024-05-15 11:02:04.123194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:47.918 [2024-05-15 11:02:04.123227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:47.918 [2024-05-15 11:02:04.123245] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:47.918 [2024-05-15 11:02:04.123487] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:47.918 [2024-05-15 11:02:04.123733] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:47.918 [2024-05-15 11:02:04.123759] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:47.918 [2024-05-15 11:02:04.123775] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:47.918 [2024-05-15 11:02:04.127411] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:47.918 [2024-05-15 11:02:04.136606] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:47.918 [2024-05-15 11:02:04.137106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:47.918 [2024-05-15 11:02:04.137138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:47.918 [2024-05-15 11:02:04.137156] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:47.918 [2024-05-15 11:02:04.137398] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:47.918 [2024-05-15 11:02:04.137644] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:47.918 [2024-05-15 11:02:04.137670] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:47.918 [2024-05-15 11:02:04.137692] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:47.918 [2024-05-15 11:02:04.141323] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.178 [2024-05-15 11:02:04.150598] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.178 [2024-05-15 11:02:04.151100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.178 [2024-05-15 11:02:04.151133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.178 [2024-05-15 11:02:04.151151] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.178 [2024-05-15 11:02:04.151394] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.178 [2024-05-15 11:02:04.151640] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.178 [2024-05-15 11:02:04.151665] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.178 [2024-05-15 11:02:04.151681] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.178 [2024-05-15 11:02:04.155342] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.178 [2024-05-15 11:02:04.164536] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.178 [2024-05-15 11:02:04.165088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.178 [2024-05-15 11:02:04.165121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.178 [2024-05-15 11:02:04.165139] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.178 [2024-05-15 11:02:04.165381] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.178 [2024-05-15 11:02:04.165627] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.178 [2024-05-15 11:02:04.165653] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.178 [2024-05-15 11:02:04.165668] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.178 [2024-05-15 11:02:04.169304] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.178 [2024-05-15 11:02:04.178502] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.178 [2024-05-15 11:02:04.179001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.178 [2024-05-15 11:02:04.179034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.178 [2024-05-15 11:02:04.179052] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.178 [2024-05-15 11:02:04.179295] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.178 [2024-05-15 11:02:04.179540] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.178 [2024-05-15 11:02:04.179566] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.178 [2024-05-15 11:02:04.179583] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.178 [2024-05-15 11:02:04.183211] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.178 [2024-05-15 11:02:04.192400] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.178 [2024-05-15 11:02:04.192914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.178 [2024-05-15 11:02:04.192956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.178 [2024-05-15 11:02:04.192977] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.179 [2024-05-15 11:02:04.193219] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.179 [2024-05-15 11:02:04.193467] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.179 [2024-05-15 11:02:04.193492] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.179 [2024-05-15 11:02:04.193509] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.179 [2024-05-15 11:02:04.197138] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.179 [2024-05-15 11:02:04.206345] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.179 [2024-05-15 11:02:04.206839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.179 [2024-05-15 11:02:04.206871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.179 [2024-05-15 11:02:04.206889] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.179 [2024-05-15 11:02:04.207145] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.179 [2024-05-15 11:02:04.207391] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.179 [2024-05-15 11:02:04.207418] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.179 [2024-05-15 11:02:04.207434] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.179 [2024-05-15 11:02:04.211060] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.179 [2024-05-15 11:02:04.220250] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.179 [2024-05-15 11:02:04.220976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.179 [2024-05-15 11:02:04.221008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.179 [2024-05-15 11:02:04.221026] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.179 [2024-05-15 11:02:04.221269] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.179 [2024-05-15 11:02:04.221514] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.179 [2024-05-15 11:02:04.221540] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.179 [2024-05-15 11:02:04.221556] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.179 [2024-05-15 11:02:04.225188] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.179 [2024-05-15 11:02:04.234167] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.179 [2024-05-15 11:02:04.234675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.179 [2024-05-15 11:02:04.234708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.179 [2024-05-15 11:02:04.234726] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.179 [2024-05-15 11:02:04.234984] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.179 [2024-05-15 11:02:04.235229] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.179 [2024-05-15 11:02:04.235254] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.179 [2024-05-15 11:02:04.235270] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.179 [2024-05-15 11:02:04.238889] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.179 [2024-05-15 11:02:04.248094] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.179 [2024-05-15 11:02:04.248597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.179 [2024-05-15 11:02:04.248630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.179 [2024-05-15 11:02:04.248648] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.179 [2024-05-15 11:02:04.248890] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.179 [2024-05-15 11:02:04.249145] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.179 [2024-05-15 11:02:04.249171] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.179 [2024-05-15 11:02:04.249186] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.179 [2024-05-15 11:02:04.252807] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.179 [2024-05-15 11:02:04.262024] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.179 [2024-05-15 11:02:04.262601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.179 [2024-05-15 11:02:04.262633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.179 [2024-05-15 11:02:04.262652] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.179 [2024-05-15 11:02:04.262894] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.179 [2024-05-15 11:02:04.263163] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.179 [2024-05-15 11:02:04.263189] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.179 [2024-05-15 11:02:04.263205] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.179 [2024-05-15 11:02:04.266895] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.179 [2024-05-15 11:02:04.276004] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.179 [2024-05-15 11:02:04.276721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.179 [2024-05-15 11:02:04.276774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.179 [2024-05-15 11:02:04.276793] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.179 [2024-05-15 11:02:04.277046] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.179 [2024-05-15 11:02:04.277294] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.179 [2024-05-15 11:02:04.277319] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.179 [2024-05-15 11:02:04.277335] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.179 [2024-05-15 11:02:04.280988] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.179 [2024-05-15 11:02:04.289989] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.179 [2024-05-15 11:02:04.290482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.179 [2024-05-15 11:02:04.290514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.179 [2024-05-15 11:02:04.290532] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.179 [2024-05-15 11:02:04.290774] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.179 [2024-05-15 11:02:04.291237] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.179 [2024-05-15 11:02:04.291264] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.179 [2024-05-15 11:02:04.291281] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.179 [2024-05-15 11:02:04.294904] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.179 [2024-05-15 11:02:04.303927] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.179 [2024-05-15 11:02:04.304502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.179 [2024-05-15 11:02:04.304535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.179 [2024-05-15 11:02:04.304553] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.179 [2024-05-15 11:02:04.304795] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.179 [2024-05-15 11:02:04.305053] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.179 [2024-05-15 11:02:04.305079] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.179 [2024-05-15 11:02:04.305095] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.179 [2024-05-15 11:02:04.308722] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.179 [2024-05-15 11:02:04.317937] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.179 [2024-05-15 11:02:04.318434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.179 [2024-05-15 11:02:04.318466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.179 [2024-05-15 11:02:04.318485] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.179 [2024-05-15 11:02:04.318726] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.179 [2024-05-15 11:02:04.318985] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.179 [2024-05-15 11:02:04.319011] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.179 [2024-05-15 11:02:04.319027] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.179 [2024-05-15 11:02:04.322652] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.179 [2024-05-15 11:02:04.331865] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.179 [2024-05-15 11:02:04.332372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.179 [2024-05-15 11:02:04.332410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.179 [2024-05-15 11:02:04.332429] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.179 [2024-05-15 11:02:04.332671] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.179 [2024-05-15 11:02:04.332917] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.180 [2024-05-15 11:02:04.332955] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.180 [2024-05-15 11:02:04.332984] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.180 [2024-05-15 11:02:04.336611] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.180 [2024-05-15 11:02:04.345818] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.180 [2024-05-15 11:02:04.346325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.180 [2024-05-15 11:02:04.346357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.180 [2024-05-15 11:02:04.346375] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.180 [2024-05-15 11:02:04.346618] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.180 [2024-05-15 11:02:04.346865] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.180 [2024-05-15 11:02:04.346890] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.180 [2024-05-15 11:02:04.346906] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.180 [2024-05-15 11:02:04.350542] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.180 [2024-05-15 11:02:04.359751] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.180 [2024-05-15 11:02:04.360276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.180 [2024-05-15 11:02:04.360309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.180 [2024-05-15 11:02:04.360327] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.180 [2024-05-15 11:02:04.360570] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.180 [2024-05-15 11:02:04.360817] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.180 [2024-05-15 11:02:04.360841] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.180 [2024-05-15 11:02:04.360857] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.180 [2024-05-15 11:02:04.364637] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.180 [2024-05-15 11:02:04.373860] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.180 [2024-05-15 11:02:04.374377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.180 [2024-05-15 11:02:04.374410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.180 [2024-05-15 11:02:04.374428] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.180 [2024-05-15 11:02:04.374670] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.180 [2024-05-15 11:02:04.374926] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.180 [2024-05-15 11:02:04.374964] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.180 [2024-05-15 11:02:04.374980] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.180 [2024-05-15 11:02:04.378605] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.180 [2024-05-15 11:02:04.387806] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.180 [2024-05-15 11:02:04.388435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.180 [2024-05-15 11:02:04.388468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.180 [2024-05-15 11:02:04.388487] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.180 [2024-05-15 11:02:04.388728] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.180 [2024-05-15 11:02:04.388987] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.180 [2024-05-15 11:02:04.389012] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.180 [2024-05-15 11:02:04.389028] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.180 [2024-05-15 11:02:04.392649] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.180 [2024-05-15 11:02:04.401850] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.180 [2024-05-15 11:02:04.402578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.180 [2024-05-15 11:02:04.402630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.180 [2024-05-15 11:02:04.402648] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.180 [2024-05-15 11:02:04.402890] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.180 [2024-05-15 11:02:04.403146] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.180 [2024-05-15 11:02:04.403173] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.180 [2024-05-15 11:02:04.403189] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.180 [2024-05-15 11:02:04.406834] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.440 [2024-05-15 11:02:04.415922] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.440 [2024-05-15 11:02:04.416661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.440 [2024-05-15 11:02:04.416712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.440 [2024-05-15 11:02:04.416730] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.440 [2024-05-15 11:02:04.416983] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.440 [2024-05-15 11:02:04.417229] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.440 [2024-05-15 11:02:04.417255] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.440 [2024-05-15 11:02:04.417272] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.440 [2024-05-15 11:02:04.420896] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.440 [2024-05-15 11:02:04.429895] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.440 [2024-05-15 11:02:04.430375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.440 [2024-05-15 11:02:04.430408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.440 [2024-05-15 11:02:04.430427] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.440 [2024-05-15 11:02:04.430669] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.440 [2024-05-15 11:02:04.430915] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.440 [2024-05-15 11:02:04.430954] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.440 [2024-05-15 11:02:04.430972] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.440 [2024-05-15 11:02:04.434595] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.440 [2024-05-15 11:02:04.443786] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.440 [2024-05-15 11:02:04.444293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.440 [2024-05-15 11:02:04.444326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.440 [2024-05-15 11:02:04.444344] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.440 [2024-05-15 11:02:04.444586] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.440 [2024-05-15 11:02:04.444832] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.440 [2024-05-15 11:02:04.444858] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.440 [2024-05-15 11:02:04.444874] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.440 [2024-05-15 11:02:04.448504] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.440 [2024-05-15 11:02:04.457702] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.440 [2024-05-15 11:02:04.458193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.440 [2024-05-15 11:02:04.458226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.440 [2024-05-15 11:02:04.458244] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.440 [2024-05-15 11:02:04.458485] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.440 [2024-05-15 11:02:04.458730] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.440 [2024-05-15 11:02:04.458756] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.440 [2024-05-15 11:02:04.458772] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.440 [2024-05-15 11:02:04.462399] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.440 [2024-05-15 11:02:04.471591] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.440 [2024-05-15 11:02:04.472066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.440 [2024-05-15 11:02:04.472098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.440 [2024-05-15 11:02:04.472122] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.440 [2024-05-15 11:02:04.472365] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.440 [2024-05-15 11:02:04.472610] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.440 [2024-05-15 11:02:04.472636] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.440 [2024-05-15 11:02:04.472652] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.440 [2024-05-15 11:02:04.476293] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.440 [2024-05-15 11:02:04.485485] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.440 [2024-05-15 11:02:04.485986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.440 [2024-05-15 11:02:04.486018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.440 [2024-05-15 11:02:04.486037] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.440 [2024-05-15 11:02:04.486279] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.440 [2024-05-15 11:02:04.486525] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.440 [2024-05-15 11:02:04.486550] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.440 [2024-05-15 11:02:04.486566] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.440 [2024-05-15 11:02:04.490198] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.440 [2024-05-15 11:02:04.499391] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.440 [2024-05-15 11:02:04.499889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.440 [2024-05-15 11:02:04.499921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.440 [2024-05-15 11:02:04.499951] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.440 [2024-05-15 11:02:04.500195] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.440 [2024-05-15 11:02:04.500441] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.440 [2024-05-15 11:02:04.500466] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.440 [2024-05-15 11:02:04.500482] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.440 [2024-05-15 11:02:04.504109] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.440 [2024-05-15 11:02:04.513299] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.440 [2024-05-15 11:02:04.513901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.440 [2024-05-15 11:02:04.513942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.440 [2024-05-15 11:02:04.513963] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.440 [2024-05-15 11:02:04.514205] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.440 [2024-05-15 11:02:04.514451] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.440 [2024-05-15 11:02:04.514482] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.440 [2024-05-15 11:02:04.514499] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.440 [2024-05-15 11:02:04.518177] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.440 [2024-05-15 11:02:04.527295] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.440 [2024-05-15 11:02:04.527801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.440 [2024-05-15 11:02:04.527834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.440 [2024-05-15 11:02:04.527853] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.440 [2024-05-15 11:02:04.528106] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.440 [2024-05-15 11:02:04.528352] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.440 [2024-05-15 11:02:04.528378] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.440 [2024-05-15 11:02:04.528395] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.440 [2024-05-15 11:02:04.532028] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.441 [2024-05-15 11:02:04.541223] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.441 [2024-05-15 11:02:04.541706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.441 [2024-05-15 11:02:04.541738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.441 [2024-05-15 11:02:04.541756] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.441 [2024-05-15 11:02:04.542009] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.441 [2024-05-15 11:02:04.542255] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.441 [2024-05-15 11:02:04.542281] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.441 [2024-05-15 11:02:04.542298] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.441 [2024-05-15 11:02:04.545915] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.441 [2024-05-15 11:02:04.555118] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.441 [2024-05-15 11:02:04.555617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.441 [2024-05-15 11:02:04.555649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.441 [2024-05-15 11:02:04.555667] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.441 [2024-05-15 11:02:04.555908] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.441 [2024-05-15 11:02:04.556168] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.441 [2024-05-15 11:02:04.556195] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.441 [2024-05-15 11:02:04.556211] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.441 [2024-05-15 11:02:04.559832] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.441 [2024-05-15 11:02:04.569028] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.441 [2024-05-15 11:02:04.569508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.441 [2024-05-15 11:02:04.569541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.441 [2024-05-15 11:02:04.569560] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.441 [2024-05-15 11:02:04.569803] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.441 [2024-05-15 11:02:04.570062] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.441 [2024-05-15 11:02:04.570089] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.441 [2024-05-15 11:02:04.570106] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.441 [2024-05-15 11:02:04.573732] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.441 [2024-05-15 11:02:04.582926] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.441 [2024-05-15 11:02:04.583433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.441 [2024-05-15 11:02:04.583465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.441 [2024-05-15 11:02:04.583483] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.441 [2024-05-15 11:02:04.583725] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.441 [2024-05-15 11:02:04.583984] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.441 [2024-05-15 11:02:04.584011] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.441 [2024-05-15 11:02:04.584028] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.441 [2024-05-15 11:02:04.587647] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.441 [2024-05-15 11:02:04.596837] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.441 [2024-05-15 11:02:04.597358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.441 [2024-05-15 11:02:04.597391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.441 [2024-05-15 11:02:04.597409] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.441 [2024-05-15 11:02:04.597652] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.441 [2024-05-15 11:02:04.597898] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.441 [2024-05-15 11:02:04.597924] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.441 [2024-05-15 11:02:04.597953] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.441 [2024-05-15 11:02:04.601577] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.441 [2024-05-15 11:02:04.610766] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.441 [2024-05-15 11:02:04.611263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.441 [2024-05-15 11:02:04.611295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.441 [2024-05-15 11:02:04.611314] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.441 [2024-05-15 11:02:04.611563] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.441 [2024-05-15 11:02:04.611810] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.441 [2024-05-15 11:02:04.611835] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.441 [2024-05-15 11:02:04.611851] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.441 [2024-05-15 11:02:04.615485] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.441 [2024-05-15 11:02:04.624690] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.441 [2024-05-15 11:02:04.625170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.441 [2024-05-15 11:02:04.625203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.441 [2024-05-15 11:02:04.625221] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.441 [2024-05-15 11:02:04.625463] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.441 [2024-05-15 11:02:04.625709] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.441 [2024-05-15 11:02:04.625735] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.441 [2024-05-15 11:02:04.625751] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.441 [2024-05-15 11:02:04.629383] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.441 [2024-05-15 11:02:04.638785] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.441 [2024-05-15 11:02:04.639292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.441 [2024-05-15 11:02:04.639324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.441 [2024-05-15 11:02:04.639342] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.441 [2024-05-15 11:02:04.639584] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.441 [2024-05-15 11:02:04.639830] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.441 [2024-05-15 11:02:04.639856] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.441 [2024-05-15 11:02:04.639872] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.441 [2024-05-15 11:02:04.643506] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.441 [2024-05-15 11:02:04.652697] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.441 [2024-05-15 11:02:04.653175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.441 [2024-05-15 11:02:04.653207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.441 [2024-05-15 11:02:04.653224] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.441 [2024-05-15 11:02:04.653467] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.441 [2024-05-15 11:02:04.653712] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.441 [2024-05-15 11:02:04.653737] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.441 [2024-05-15 11:02:04.653759] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.441 [2024-05-15 11:02:04.657396] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.441 [2024-05-15 11:02:04.666588] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.441 [2024-05-15 11:02:04.667084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.441 [2024-05-15 11:02:04.667116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.441 [2024-05-15 11:02:04.667135] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.441 [2024-05-15 11:02:04.667382] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.441 [2024-05-15 11:02:04.667628] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.441 [2024-05-15 11:02:04.667653] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.441 [2024-05-15 11:02:04.667670] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.441 [2024-05-15 11:02:04.671343] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.701 [2024-05-15 11:02:04.680590] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.701 [2024-05-15 11:02:04.681103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.701 [2024-05-15 11:02:04.681137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.701 [2024-05-15 11:02:04.681155] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.701 [2024-05-15 11:02:04.681397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.701 [2024-05-15 11:02:04.681643] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.701 [2024-05-15 11:02:04.681669] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.701 [2024-05-15 11:02:04.681685] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.701 [2024-05-15 11:02:04.685317] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.701 [2024-05-15 11:02:04.694508] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:48.701 [2024-05-15 11:02:04.695015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:48.701 [2024-05-15 11:02:04.695047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:48.701 [2024-05-15 11:02:04.695066] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:48.701 [2024-05-15 11:02:04.695308] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:48.701 [2024-05-15 11:02:04.695554] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:48.701 [2024-05-15 11:02:04.695580] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:48.701 [2024-05-15 11:02:04.695596] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:48.701 [2024-05-15 11:02:04.699229] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:48.701 [2024-05-15 11:02:04.708421] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.701 [2024-05-15 11:02:04.708927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.701 [2024-05-15 11:02:04.708967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.701 [2024-05-15 11:02:04.708985] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.701 [2024-05-15 11:02:04.709227] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.701 [2024-05-15 11:02:04.709473] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.701 [2024-05-15 11:02:04.709498] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.701 [2024-05-15 11:02:04.709515] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.701 [2024-05-15 11:02:04.713144] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.701 [2024-05-15 11:02:04.722339] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.701 [2024-05-15 11:02:04.722848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.701 [2024-05-15 11:02:04.722880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.701 [2024-05-15 11:02:04.722899] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.701 [2024-05-15 11:02:04.723154] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.701 [2024-05-15 11:02:04.723400] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.701 [2024-05-15 11:02:04.723426] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.701 [2024-05-15 11:02:04.723442] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.701 [2024-05-15 11:02:04.727072] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:48.701 [2024-05-15 11:02:04.736270] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.701 [2024-05-15 11:02:04.736768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.701 [2024-05-15 11:02:04.736801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.701 [2024-05-15 11:02:04.736819] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.701 [2024-05-15 11:02:04.737072] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.701 [2024-05-15 11:02:04.737319] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.701 [2024-05-15 11:02:04.737345] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.701 [2024-05-15 11:02:04.737361] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.701 [2024-05-15 11:02:04.740991] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.701 [2024-05-15 11:02:04.750200] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.701 [2024-05-15 11:02:04.750741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.701 [2024-05-15 11:02:04.750774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.701 [2024-05-15 11:02:04.750793] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.701 [2024-05-15 11:02:04.751050] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.701 [2024-05-15 11:02:04.751303] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.701 [2024-05-15 11:02:04.751330] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.701 [2024-05-15 11:02:04.751346] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.701 [2024-05-15 11:02:04.754976] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:48.701 [2024-05-15 11:02:04.764170] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.701 [2024-05-15 11:02:04.764737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.701 [2024-05-15 11:02:04.764769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.701 [2024-05-15 11:02:04.764788] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.701 [2024-05-15 11:02:04.765043] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.701 [2024-05-15 11:02:04.765290] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.701 [2024-05-15 11:02:04.765314] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.701 [2024-05-15 11:02:04.765330] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.701 [2024-05-15 11:02:04.768999] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.701 [2024-05-15 11:02:04.778314] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.701 [2024-05-15 11:02:04.778835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.701 [2024-05-15 11:02:04.778868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.701 [2024-05-15 11:02:04.778888] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.701 [2024-05-15 11:02:04.779146] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.701 [2024-05-15 11:02:04.779393] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.701 [2024-05-15 11:02:04.779422] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.701 [2024-05-15 11:02:04.779440] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.702 [2024-05-15 11:02:04.783075] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:48.702 [2024-05-15 11:02:04.792266] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.702 [2024-05-15 11:02:04.792774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.702 [2024-05-15 11:02:04.792806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.702 [2024-05-15 11:02:04.792824] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.702 [2024-05-15 11:02:04.793079] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.702 [2024-05-15 11:02:04.793325] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.702 [2024-05-15 11:02:04.793351] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.702 [2024-05-15 11:02:04.793367] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.702 [2024-05-15 11:02:04.797001] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.702 [2024-05-15 11:02:04.806191] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.702 [2024-05-15 11:02:04.806644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.702 [2024-05-15 11:02:04.806677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.702 [2024-05-15 11:02:04.806695] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.702 [2024-05-15 11:02:04.806952] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.702 [2024-05-15 11:02:04.807199] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.702 [2024-05-15 11:02:04.807225] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.702 [2024-05-15 11:02:04.807241] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.702 [2024-05-15 11:02:04.810860] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:48.702 [2024-05-15 11:02:04.820284] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.702 [2024-05-15 11:02:04.820793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.702 [2024-05-15 11:02:04.820826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.702 [2024-05-15 11:02:04.820845] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.702 [2024-05-15 11:02:04.821101] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.702 [2024-05-15 11:02:04.821347] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.702 [2024-05-15 11:02:04.821374] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.702 [2024-05-15 11:02:04.821390] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.702 [2024-05-15 11:02:04.825030] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.702 [2024-05-15 11:02:04.834220] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.702 [2024-05-15 11:02:04.834702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.702 [2024-05-15 11:02:04.834734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.702 [2024-05-15 11:02:04.834752] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.702 [2024-05-15 11:02:04.835007] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.702 [2024-05-15 11:02:04.835254] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.702 [2024-05-15 11:02:04.835279] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.702 [2024-05-15 11:02:04.835296] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.702 [2024-05-15 11:02:04.838917] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:48.702 [2024-05-15 11:02:04.848114] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.702 [2024-05-15 11:02:04.848601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.702 [2024-05-15 11:02:04.848633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.702 [2024-05-15 11:02:04.848657] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.702 [2024-05-15 11:02:04.848900] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.702 [2024-05-15 11:02:04.849158] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.702 [2024-05-15 11:02:04.849185] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.702 [2024-05-15 11:02:04.849201] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.702 [2024-05-15 11:02:04.852819] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.702 [2024-05-15 11:02:04.862016] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.702 [2024-05-15 11:02:04.862514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.702 [2024-05-15 11:02:04.862546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.702 [2024-05-15 11:02:04.862565] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.702 [2024-05-15 11:02:04.862806] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.702 [2024-05-15 11:02:04.863064] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.702 [2024-05-15 11:02:04.863091] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.702 [2024-05-15 11:02:04.863107] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.702 [2024-05-15 11:02:04.866727] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:48.702 [2024-05-15 11:02:04.875926] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.702 [2024-05-15 11:02:04.876439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.702 [2024-05-15 11:02:04.876471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.702 [2024-05-15 11:02:04.876490] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.702 [2024-05-15 11:02:04.876731] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.702 [2024-05-15 11:02:04.876991] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.702 [2024-05-15 11:02:04.877018] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.702 [2024-05-15 11:02:04.877034] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.702 [2024-05-15 11:02:04.880652] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.702 [2024-05-15 11:02:04.889842] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.702 [2024-05-15 11:02:04.890402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.702 [2024-05-15 11:02:04.890448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.702 [2024-05-15 11:02:04.890468] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.702 [2024-05-15 11:02:04.890718] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.702 [2024-05-15 11:02:04.890989] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.702 [2024-05-15 11:02:04.891016] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.702 [2024-05-15 11:02:04.891033] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.702 [2024-05-15 11:02:04.894658] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:48.702 [2024-05-15 11:02:04.903854] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.702 [2024-05-15 11:02:04.904348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.702 [2024-05-15 11:02:04.904384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.702 [2024-05-15 11:02:04.904404] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.702 [2024-05-15 11:02:04.904647] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.702 [2024-05-15 11:02:04.904894] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.702 [2024-05-15 11:02:04.904920] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.702 [2024-05-15 11:02:04.904949] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.702 [2024-05-15 11:02:04.908577] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.702 [2024-05-15 11:02:04.917778] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.702 [2024-05-15 11:02:04.918264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.702 [2024-05-15 11:02:04.918297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.702 [2024-05-15 11:02:04.918316] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.702 [2024-05-15 11:02:04.918559] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.702 [2024-05-15 11:02:04.918806] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.702 [2024-05-15 11:02:04.918831] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.702 [2024-05-15 11:02:04.918847] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.702 [2024-05-15 11:02:04.922482] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:48.702 [2024-05-15 11:02:04.931714] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.702 [2024-05-15 11:02:04.932241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.703 [2024-05-15 11:02:04.932277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.703 [2024-05-15 11:02:04.932298] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.703 [2024-05-15 11:02:04.932549] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.962 [2024-05-15 11:02:04.932795] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.962 [2024-05-15 11:02:04.932822] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.963 [2024-05-15 11:02:04.932838] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.963 [2024-05-15 11:02:04.936497] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.963 [2024-05-15 11:02:04.945729] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.963 [2024-05-15 11:02:04.946237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.963 [2024-05-15 11:02:04.946270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.963 [2024-05-15 11:02:04.946288] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.963 [2024-05-15 11:02:04.946530] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.963 [2024-05-15 11:02:04.946776] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.963 [2024-05-15 11:02:04.946801] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.963 [2024-05-15 11:02:04.946818] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.963 [2024-05-15 11:02:04.950451] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:48.963 [2024-05-15 11:02:04.959643] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.963 [2024-05-15 11:02:04.960147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.963 [2024-05-15 11:02:04.960180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.963 [2024-05-15 11:02:04.960199] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.963 [2024-05-15 11:02:04.960442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.963 [2024-05-15 11:02:04.960688] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.963 [2024-05-15 11:02:04.960714] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.963 [2024-05-15 11:02:04.960729] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.963 [2024-05-15 11:02:04.964363] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.963 [2024-05-15 11:02:04.973571] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.963 [2024-05-15 11:02:04.974096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.963 [2024-05-15 11:02:04.974130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.963 [2024-05-15 11:02:04.974149] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.963 [2024-05-15 11:02:04.974392] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.963 [2024-05-15 11:02:04.974639] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.963 [2024-05-15 11:02:04.974666] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.963 [2024-05-15 11:02:04.974683] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.963 [2024-05-15 11:02:04.978313] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:48.963 [2024-05-15 11:02:04.987517] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.963 [2024-05-15 11:02:04.988027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.963 [2024-05-15 11:02:04.988059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.963 [2024-05-15 11:02:04.988084] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.963 [2024-05-15 11:02:04.988327] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.963 [2024-05-15 11:02:04.988574] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.963 [2024-05-15 11:02:04.988599] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.963 [2024-05-15 11:02:04.988615] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.963 [2024-05-15 11:02:04.992252] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.963 [2024-05-15 11:02:05.001444] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.963 [2024-05-15 11:02:05.001946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.963 [2024-05-15 11:02:05.001987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.963 [2024-05-15 11:02:05.002005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.963 [2024-05-15 11:02:05.002247] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.963 [2024-05-15 11:02:05.002494] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.963 [2024-05-15 11:02:05.002520] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.963 [2024-05-15 11:02:05.002536] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.963 [2024-05-15 11:02:05.006165] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:48.963 [2024-05-15 11:02:05.015358] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.963 [2024-05-15 11:02:05.015928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.963 [2024-05-15 11:02:05.015966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.963 [2024-05-15 11:02:05.015985] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.963 [2024-05-15 11:02:05.016234] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.963 [2024-05-15 11:02:05.016489] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.963 [2024-05-15 11:02:05.016515] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.963 [2024-05-15 11:02:05.016531] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.963 [2024-05-15 11:02:05.020238] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.963 [2024-05-15 11:02:05.029262] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.963 [2024-05-15 11:02:05.029733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.963 [2024-05-15 11:02:05.029765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.963 [2024-05-15 11:02:05.029784] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.963 [2024-05-15 11:02:05.030036] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.963 [2024-05-15 11:02:05.030284] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.963 [2024-05-15 11:02:05.030314] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.963 [2024-05-15 11:02:05.030332] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.963 [2024-05-15 11:02:05.033960] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:48.963 [2024-05-15 11:02:05.043151] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.963 [2024-05-15 11:02:05.043651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.963 [2024-05-15 11:02:05.043683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.963 [2024-05-15 11:02:05.043701] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.963 [2024-05-15 11:02:05.043953] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.963 [2024-05-15 11:02:05.044200] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.963 [2024-05-15 11:02:05.044225] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.963 [2024-05-15 11:02:05.044241] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.963 [2024-05-15 11:02:05.047860] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.963 [2024-05-15 11:02:05.057057] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.963 [2024-05-15 11:02:05.057559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.963 [2024-05-15 11:02:05.057591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.963 [2024-05-15 11:02:05.057609] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.963 [2024-05-15 11:02:05.057851] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.963 [2024-05-15 11:02:05.058110] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.963 [2024-05-15 11:02:05.058136] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.963 [2024-05-15 11:02:05.058153] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.963 [2024-05-15 11:02:05.061771] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:48.963 [2024-05-15 11:02:05.070975] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.963 [2024-05-15 11:02:05.071452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.963 [2024-05-15 11:02:05.071484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.963 [2024-05-15 11:02:05.071502] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.963 [2024-05-15 11:02:05.071745] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.963 [2024-05-15 11:02:05.072005] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.963 [2024-05-15 11:02:05.072031] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.963 [2024-05-15 11:02:05.072047] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.964 [2024-05-15 11:02:05.075668] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.964 [2024-05-15 11:02:05.084875] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.964 [2024-05-15 11:02:05.085518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.964 [2024-05-15 11:02:05.085564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.964 [2024-05-15 11:02:05.085590] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.964 [2024-05-15 11:02:05.085839] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.964 [2024-05-15 11:02:05.086101] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.964 [2024-05-15 11:02:05.086127] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.964 [2024-05-15 11:02:05.086143] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.964 [2024-05-15 11:02:05.089768] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:48.964 [2024-05-15 11:02:05.098965] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.964 [2024-05-15 11:02:05.099481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.964 [2024-05-15 11:02:05.099515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.964 [2024-05-15 11:02:05.099542] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.964 [2024-05-15 11:02:05.099785] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.964 [2024-05-15 11:02:05.100044] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.964 [2024-05-15 11:02:05.100070] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.964 [2024-05-15 11:02:05.100086] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.964 [2024-05-15 11:02:05.103707] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.964 [2024-05-15 11:02:05.112898] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.964 [2024-05-15 11:02:05.113463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.964 [2024-05-15 11:02:05.113496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.964 [2024-05-15 11:02:05.113524] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.964 [2024-05-15 11:02:05.113767] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.964 [2024-05-15 11:02:05.114024] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.964 [2024-05-15 11:02:05.114050] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.964 [2024-05-15 11:02:05.114066] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.964 [2024-05-15 11:02:05.117688] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:48.964 [2024-05-15 11:02:05.126895] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.964 [2024-05-15 11:02:05.127592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.964 [2024-05-15 11:02:05.127643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.964 [2024-05-15 11:02:05.127661] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.964 [2024-05-15 11:02:05.127910] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.964 [2024-05-15 11:02:05.128175] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.964 [2024-05-15 11:02:05.128201] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.964 [2024-05-15 11:02:05.128219] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.964 [2024-05-15 11:02:05.131836] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.964 [2024-05-15 11:02:05.140852] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.964 [2024-05-15 11:02:05.141420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.964 [2024-05-15 11:02:05.141466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.964 [2024-05-15 11:02:05.141486] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.964 [2024-05-15 11:02:05.141735] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.964 [2024-05-15 11:02:05.141997] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.964 [2024-05-15 11:02:05.142023] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.964 [2024-05-15 11:02:05.142039] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.964 [2024-05-15 11:02:05.145662] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:48.964 [2024-05-15 11:02:05.154862] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.964 [2024-05-15 11:02:05.155365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.964 [2024-05-15 11:02:05.155401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.964 [2024-05-15 11:02:05.155420] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.964 [2024-05-15 11:02:05.155664] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.964 [2024-05-15 11:02:05.155911] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.964 [2024-05-15 11:02:05.155954] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.964 [2024-05-15 11:02:05.155972] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.964 [2024-05-15 11:02:05.159592] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:48.964 [2024-05-15 11:02:05.168796] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.964 [2024-05-15 11:02:05.169286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.964 [2024-05-15 11:02:05.169319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.964 [2024-05-15 11:02:05.169338] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.964 [2024-05-15 11:02:05.169580] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.964 [2024-05-15 11:02:05.169827] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.964 [2024-05-15 11:02:05.169852] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.964 [2024-05-15 11:02:05.169874] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.964 [2024-05-15 11:02:05.173515] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:48.964 [2024-05-15 11:02:05.182705] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:48.964 [2024-05-15 11:02:05.183210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:48.964 [2024-05-15 11:02:05.183242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:48.964 [2024-05-15 11:02:05.183260] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:48.964 [2024-05-15 11:02:05.183501] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:48.964 [2024-05-15 11:02:05.183747] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:48.964 [2024-05-15 11:02:05.183772] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:48.964 [2024-05-15 11:02:05.183789] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:48.964 [2024-05-15 11:02:05.187418] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.226 [2024-05-15 11:02:05.196699] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:49.226 [2024-05-15 11:02:05.197217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.226 [2024-05-15 11:02:05.197249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:49.226 [2024-05-15 11:02:05.197268] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:49.226 [2024-05-15 11:02:05.197510] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:49.226 [2024-05-15 11:02:05.197757] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:49.226 [2024-05-15 11:02:05.197782] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:49.226 [2024-05-15 11:02:05.197798] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:49.226 [2024-05-15 11:02:05.201453] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
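The timestamps give the retry cadence: a new reset cycle begins roughly every 14 ms for as long as the target is down, i.e. bdev_nvme re-arms the reconnect almost immediately after each failure. A simplified sketch of that retry shape (an illustration only; SPDK's real reconnect path is event-driven inside its reactor, not a blocking loop like this):

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    /* One connection attempt; returns 0 once a listener answers. */
    static int try_connect(const char *ip, uint16_t port)
    {
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(port) };
        inet_pton(AF_INET, ip, &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        int rc = connect(fd, (struct sockaddr *)&addr, sizeof(addr));
        if (rc != 0)
            fprintf(stderr, "attempt failed: %s\n", strerror(errno));
        close(fd);
        return rc;
    }

    int main(void)
    {
        for (int attempt = 1; attempt <= 1000; attempt++) {
            if (try_connect("10.0.0.2", 4420) == 0) {
                printf("reconnected on attempt %d\n", attempt);
                return 0;
            }
            usleep(14 * 1000); /* ~14 ms between cycles, as in the log */
        }
        fprintf(stderr, "giving up\n");
        return 1;
    }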
00:21:49.226 [2024-05-15 11:02:05.210644] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:49.226 [2024-05-15 11:02:05.211146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.226 [2024-05-15 11:02:05.211179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:49.226 [2024-05-15 11:02:05.211198] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:49.226 [2024-05-15 11:02:05.211440] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:49.226 [2024-05-15 11:02:05.211687] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:49.226 [2024-05-15 11:02:05.211711] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:49.226 [2024-05-15 11:02:05.211728] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:49.226 [2024-05-15 11:02:05.215356] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.226 [2024-05-15 11:02:05.224542] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:49.226 [2024-05-15 11:02:05.225044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.226 [2024-05-15 11:02:05.225082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:49.226 [2024-05-15 11:02:05.225101] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:49.226 [2024-05-15 11:02:05.225342] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:49.226 [2024-05-15 11:02:05.225590] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:49.226 [2024-05-15 11:02:05.225614] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:49.226 [2024-05-15 11:02:05.225630] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:49.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2886948 Killed "${NVMF_APP[@]}" "$@" 00:21:49.226 11:02:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:21:49.226 11:02:05 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:21:49.226 11:02:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:49.226 11:02:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@720 -- # xtrace_disable 00:21:49.226 11:02:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:49.226 [2024-05-15 11:02:05.229265] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
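Here the log changes character: bash reports that bdevperf.sh line 35 saw the nvmf target process (PID 2886948, the "${NVMF_APP[@]}" job) get killed -- evidently by the test itself, since the script immediately calls tgt_init/nvmfappstart to bring a fresh target up -- which is what all the refused connections above were about. The -m 0xE passed to nvmfappstart is the SPDK/DPDK CPU core mask (it reappears below as the EAL -c 0xE coremask): bit i selects core i, so 0xE = binary 1110 runs the target on cores 1-3 and leaves core 0 free. A tiny decoder, for illustration:

    #include <stdio.h>

    int main(void)
    {
        unsigned long long mask = 0xE; /* the -m value handed to nvmf_tgt */
        for (int core = 0; core < 64; core++) {
            if (mask & (1ULL << core))
                printf("core %d selected\n", core); /* prints cores 1, 2, 3 */
        }
        return 0;
    }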
00:21:49.226 11:02:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2888031 00:21:49.226 11:02:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:49.226 11:02:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2888031 00:21:49.226 11:02:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@827 -- # '[' -z 2888031 ']' 00:21:49.226 11:02:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.226 11:02:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@832 -- # local max_retries=100 00:21:49.226 11:02:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.226 11:02:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # xtrace_disable 00:21:49.226 11:02:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:49.226 [2024-05-15 11:02:05.238463] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:49.226 [2024-05-15 11:02:05.238976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.226 [2024-05-15 11:02:05.239008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:49.226 [2024-05-15 11:02:05.239026] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:49.226 [2024-05-15 11:02:05.239268] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:49.226 [2024-05-15 11:02:05.239513] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:49.226 [2024-05-15 11:02:05.239538] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:49.226 [2024-05-15 11:02:05.239554] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:49.226 [2024-05-15 11:02:05.243184] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
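waitforlisten 2888031 (from test/common/autotest_common.sh, per the sh@8xx markers in the trace) blocks until the freshly exec'd nvmf_tgt -- PID 2888031, launched inside the cvl_0_0_ns_spdk network namespace -- is accepting JSON-RPC connections on /var/tmp/spdk.sock. The core of such a helper is just polling a UNIX-domain connect; a self-contained sketch:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    /* Poll until something is accepting on a UNIX socket, or give up. */
    static int wait_for_listen(const char *path, int max_tries)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        for (int i = 0; i < max_tries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd); /* target is up and listening */
                return 0;
            }
            close(fd);
            usleep(100 * 1000); /* retry every 100 ms */
        }
        return -1;
    }

    int main(void)
    {
        return wait_for_listen("/var/tmp/spdk.sock", 100) == 0 ? 0 : 1;
    }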
00:21:49.226 [2024-05-15 11:02:05.252377] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:49.226 [2024-05-15 11:02:05.252856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.226 [2024-05-15 11:02:05.252893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:49.226 [2024-05-15 11:02:05.252911] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:49.226 [2024-05-15 11:02:05.253161] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:49.226 [2024-05-15 11:02:05.253408] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:49.226 [2024-05-15 11:02:05.253440] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:49.226 [2024-05-15 11:02:05.253456] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:49.226 [2024-05-15 11:02:05.257084] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.226 [2024-05-15 11:02:05.266307] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:49.226 [2024-05-15 11:02:05.266836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.226 [2024-05-15 11:02:05.266868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:49.226 [2024-05-15 11:02:05.266887] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:49.226 [2024-05-15 11:02:05.267149] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:49.226 [2024-05-15 11:02:05.267404] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:49.226 [2024-05-15 11:02:05.267428] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:49.226 [2024-05-15 11:02:05.267444] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:49.226 [2024-05-15 11:02:05.271183] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.226 [2024-05-15 11:02:05.277767] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:21:49.226 [2024-05-15 11:02:05.277834] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.226 [2024-05-15 11:02:05.280228] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:49.226 [2024-05-15 11:02:05.280719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.226 [2024-05-15 11:02:05.280750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:49.226 [2024-05-15 11:02:05.280768] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:49.226 [2024-05-15 11:02:05.281022] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:49.226 [2024-05-15 11:02:05.281268] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:49.226 [2024-05-15 11:02:05.281292] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:49.226 [2024-05-15 11:02:05.281309] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:49.226 [2024-05-15 11:02:05.284597] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.226 [2024-05-15 11:02:05.293481] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:49.226 [2024-05-15 11:02:05.293956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.226 [2024-05-15 11:02:05.293984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:49.226 [2024-05-15 11:02:05.294006] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:49.226 [2024-05-15 11:02:05.294261] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:49.226 [2024-05-15 11:02:05.294463] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:49.226 [2024-05-15 11:02:05.294482] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:49.226 [2024-05-15 11:02:05.294495] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:49.226 [2024-05-15 11:02:05.297514] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:49.227 EAL: No free 2048 kB hugepages reported on node 1
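The EAL message above means DPDK found no free 2048 kB hugepages on NUMA node 1 during its init scan; it can still allocate from the pool on other nodes. A minimal sketch, assuming the standard Linux per-node sysfs layout, that reads the counter this message refers to:

    /* Minimal sketch: read the free 2048 kB hugepage count for NUMA
     * node 1, the counter behind the EAL warning above. Adjust the
     * node number for the machine at hand. */
    #include <stdio.h>

    int main(void)
    {
        const char *path =
            "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages";
        FILE *f = fopen(path, "r");
        long free_pages = 0;

        if (!f) {
            perror("fopen");   /* e.g. a single-socket box with no node1 */
            return 1;
        }
        if (fscanf(f, "%ld", &free_pages) == 1)
            printf("node1 free 2048kB hugepages: %ld\n", free_pages);
        fclose(f);
        return 0;
    }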
00:21:49.227 [2024-05-15 11:02:05.359443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
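The "-c 0xE" in the EAL parameters earlier is a core bitmask: 0xE is binary 1110, i.e. cores 1, 2 and 3, which matches both the "Total cores available: 3" notice here and the reactors that start on cores 1, 2 and 3 further down. A small sketch decoding such a mask:

    /* Minimal sketch: decode a DPDK/SPDK core mask such as the "-c 0xE"
     * seen in the EAL parameters above. 0xE = 0b1110 -> cores 1, 2, 3. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long mask = 0xE;   /* from "-c 0xE" */
        int count = 0;

        /* Walk the mask bit by bit; bit N set means core N is selected. */
        for (int core = 0; mask != 0; core++, mask >>= 1) {
            if (mask & 1UL) {
                printf("core %d selected\n", core);
                count++;
            }
        }
        printf("Total cores available: %d\n", count);
        return 0;
    }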
00:21:49.488 [2024-05-15 11:02:05.485810] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:49.488 [2024-05-15 11:02:05.485846] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:49.488 [2024-05-15 11:02:05.485869] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:49.488 [2024-05-15 11:02:05.485882] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:49.488 [2024-05-15 11:02:05.485894] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:21:49.488 [2024-05-15 11:02:05.486155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:21:49.488 [2024-05-15 11:02:05.486189] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:21:49.488 [2024-05-15 11:02:05.486193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
[... the reset/reconnect cycle keeps failing identically (connect() errno = 111, tqpair=0x1ade990, 10.0.0.2:4420) every ~13 ms through 2024-05-15 11:02:05.925 ...]
00:21:49.751 [2024-05-15 11:02:05.935148] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:49.751 [2024-05-15 11:02:05.935627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.751 [2024-05-15 11:02:05.935654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:49.751 [2024-05-15 11:02:05.935670] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:49.751 [2024-05-15 11:02:05.935901] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:49.751 [2024-05-15 11:02:05.936147] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:49.751 [2024-05-15 11:02:05.936170] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:49.751 [2024-05-15 11:02:05.936184] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:49.751 [2024-05-15 11:02:05.939403] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.751 [2024-05-15 11:02:05.948631] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:49.751 [2024-05-15 11:02:05.949093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.751 [2024-05-15 11:02:05.949122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:49.751 [2024-05-15 11:02:05.949138] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:49.751 [2024-05-15 11:02:05.949355] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:49.751 [2024-05-15 11:02:05.949585] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:49.751 [2024-05-15 11:02:05.949606] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:49.751 [2024-05-15 11:02:05.949620] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:49.751 [2024-05-15 11:02:05.952822] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:49.751 [2024-05-15 11:02:05.962257] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:49.751 [2024-05-15 11:02:05.962728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.751 [2024-05-15 11:02:05.962756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:49.751 [2024-05-15 11:02:05.962771] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:49.751 [2024-05-15 11:02:05.963026] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:49.751 [2024-05-15 11:02:05.963274] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:49.751 [2024-05-15 11:02:05.963295] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:49.751 [2024-05-15 11:02:05.963309] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:49.751 [2024-05-15 11:02:05.966536] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:49.751 [2024-05-15 11:02:05.975818] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:49.751 [2024-05-15 11:02:05.976276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.751 [2024-05-15 11:02:05.976304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:49.751 [2024-05-15 11:02:05.976320] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:49.751 [2024-05-15 11:02:05.976550] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:49.751 [2024-05-15 11:02:05.976787] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:49.751 [2024-05-15 11:02:05.976813] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:49.751 [2024-05-15 11:02:05.976827] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:49.751 [2024-05-15 11:02:05.980289] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:50.011 [2024-05-15 11:02:05.989525] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:50.011 [2024-05-15 11:02:05.989959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.011 [2024-05-15 11:02:05.989989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:50.011 [2024-05-15 11:02:05.990005] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:50.011 [2024-05-15 11:02:05.990224] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:50.011 [2024-05-15 11:02:05.990454] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:50.011 [2024-05-15 11:02:05.990475] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:50.011 [2024-05-15 11:02:05.990488] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:50.011 [2024-05-15 11:02:05.993793] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:50.011 [2024-05-15 11:02:06.003047] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:50.011 [2024-05-15 11:02:06.003516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.011 [2024-05-15 11:02:06.003544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:50.011 [2024-05-15 11:02:06.003561] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:50.011 [2024-05-15 11:02:06.003778] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:50.011 [2024-05-15 11:02:06.004038] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:50.011 [2024-05-15 11:02:06.004060] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:50.011 [2024-05-15 11:02:06.004074] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:50.011 [2024-05-15 11:02:06.007310] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:50.011 [2024-05-15 11:02:06.016554] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:50.011 [2024-05-15 11:02:06.017019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.011 [2024-05-15 11:02:06.017047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:50.011 [2024-05-15 11:02:06.017068] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:50.011 [2024-05-15 11:02:06.017301] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:50.011 [2024-05-15 11:02:06.017515] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:50.011 [2024-05-15 11:02:06.017536] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:50.011 [2024-05-15 11:02:06.017550] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:50.011 [2024-05-15 11:02:06.020750] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:50.011 [2024-05-15 11:02:06.030038] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:50.011 [2024-05-15 11:02:06.030522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.011 [2024-05-15 11:02:06.030550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:50.011 [2024-05-15 11:02:06.030575] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:50.011 [2024-05-15 11:02:06.030810] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:50.011 [2024-05-15 11:02:06.031062] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:50.011 [2024-05-15 11:02:06.031085] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:50.011 [2024-05-15 11:02:06.031099] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:50.011 [2024-05-15 11:02:06.034540] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:50.011 [2024-05-15 11:02:06.043576] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:50.011 [2024-05-15 11:02:06.044042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.011 [2024-05-15 11:02:06.044071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:50.011 [2024-05-15 11:02:06.044087] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:50.011 [2024-05-15 11:02:06.044304] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:50.011 [2024-05-15 11:02:06.044535] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:50.011 [2024-05-15 11:02:06.044555] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:50.011 [2024-05-15 11:02:06.044568] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:50.011 [2024-05-15 11:02:06.047777] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:50.011 [2024-05-15 11:02:06.057050] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:50.011 [2024-05-15 11:02:06.057503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.011 [2024-05-15 11:02:06.057531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:50.011 [2024-05-15 11:02:06.057547] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:50.011 [2024-05-15 11:02:06.057777] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:50.011 [2024-05-15 11:02:06.058019] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:50.011 [2024-05-15 11:02:06.058046] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:50.011 [2024-05-15 11:02:06.058061] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:50.011 [2024-05-15 11:02:06.061286] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:50.011 [2024-05-15 11:02:06.070559] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:50.011 [2024-05-15 11:02:06.071017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.011 [2024-05-15 11:02:06.071046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:50.011 [2024-05-15 11:02:06.071062] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:50.011 [2024-05-15 11:02:06.071279] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:50.011 [2024-05-15 11:02:06.071508] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:50.011 [2024-05-15 11:02:06.071529] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:50.011 [2024-05-15 11:02:06.071543] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:50.011 [2024-05-15 11:02:06.074820] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:50.011 [2024-05-15 11:02:06.084091] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:50.011 [2024-05-15 11:02:06.084560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.011 [2024-05-15 11:02:06.084588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:50.011 [2024-05-15 11:02:06.084604] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:50.011 [2024-05-15 11:02:06.084832] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:50.011 [2024-05-15 11:02:06.085075] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:50.011 [2024-05-15 11:02:06.085097] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:50.011 [2024-05-15 11:02:06.085112] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:50.011 [2024-05-15 11:02:06.088355] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:50.011 [2024-05-15 11:02:06.097656] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:50.011 [2024-05-15 11:02:06.098111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.011 [2024-05-15 11:02:06.098140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:50.012 [2024-05-15 11:02:06.098156] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:50.012 [2024-05-15 11:02:06.098373] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:50.012 [2024-05-15 11:02:06.098602] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:50.012 [2024-05-15 11:02:06.098623] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:50.012 [2024-05-15 11:02:06.098637] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:50.012 [2024-05-15 11:02:06.101852] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:50.012 [2024-05-15 11:02:06.111117] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:50.012 [2024-05-15 11:02:06.111573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.012 [2024-05-15 11:02:06.111601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:50.012 [2024-05-15 11:02:06.111617] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:50.012 [2024-05-15 11:02:06.111847] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:50.012 [2024-05-15 11:02:06.112092] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:50.012 [2024-05-15 11:02:06.112115] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:50.012 [2024-05-15 11:02:06.112129] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:50.012 [2024-05-15 11:02:06.115351] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:50.012 [2024-05-15 11:02:06.124591] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:50.012 [2024-05-15 11:02:06.125019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.012 [2024-05-15 11:02:06.125048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:50.012 [2024-05-15 11:02:06.125064] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:50.012 [2024-05-15 11:02:06.125296] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:50.012 [2024-05-15 11:02:06.125510] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:50.012 [2024-05-15 11:02:06.125531] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:50.012 [2024-05-15 11:02:06.125545] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:50.012 [2024-05-15 11:02:06.128747] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:50.012 [2024-05-15 11:02:06.138201] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:50.012 [2024-05-15 11:02:06.138663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.012 [2024-05-15 11:02:06.138691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:50.012 [2024-05-15 11:02:06.138707] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:50.012 [2024-05-15 11:02:06.138925] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:50.012 [2024-05-15 11:02:06.139186] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:50.012 [2024-05-15 11:02:06.139208] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:50.012 [2024-05-15 11:02:06.139222] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:50.012 [2024-05-15 11:02:06.142444] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:50.012 [2024-05-15 11:02:06.151683] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:50.012 [2024-05-15 11:02:06.152178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.012 [2024-05-15 11:02:06.152206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:50.012 [2024-05-15 11:02:06.152223] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:50.012 [2024-05-15 11:02:06.152460] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:50.012 [2024-05-15 11:02:06.152675] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:50.012 [2024-05-15 11:02:06.152696] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:50.012 [2024-05-15 11:02:06.152710] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:50.012 [2024-05-15 11:02:06.155928] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:50.012 [2024-05-15 11:02:06.165183] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:50.012 [2024-05-15 11:02:06.165649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.012 [2024-05-15 11:02:06.165678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:50.012 [2024-05-15 11:02:06.165695] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:50.012 [2024-05-15 11:02:06.165926] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:50.012 [2024-05-15 11:02:06.166172] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:50.012 [2024-05-15 11:02:06.166194] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:50.012 [2024-05-15 11:02:06.166208] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:50.012 [2024-05-15 11:02:06.169425] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:50.012 [2024-05-15 11:02:06.178693] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:50.012 [2024-05-15 11:02:06.179136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.012 [2024-05-15 11:02:06.179165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:50.012 [2024-05-15 11:02:06.179181] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:50.012 [2024-05-15 11:02:06.179411] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:50.012 [2024-05-15 11:02:06.179625] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:50.012 [2024-05-15 11:02:06.179647] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:50.012 [2024-05-15 11:02:06.179660] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:50.012 [2024-05-15 11:02:06.182865] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:50.012 [2024-05-15 11:02:06.192333] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:50.012 [2024-05-15 11:02:06.192781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.012 [2024-05-15 11:02:06.192809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420 00:21:50.012 [2024-05-15 11:02:06.192825] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set 00:21:50.012 [2024-05-15 11:02:06.193052] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor 00:21:50.012 [2024-05-15 11:02:06.193274] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:50.012 [2024-05-15 11:02:06.193295] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:50.012 [2024-05-15 11:02:06.193314] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:50.012 [2024-05-15 11:02:06.196557] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
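For context, errno 111 is ECONNREFUSED: each reconnect attempt above is refused because nothing is accepting TCP connections on 10.0.0.2:4420 yet; the listener is only created by the rpc_cmd trace further down, after which the reset finally succeeds. A minimal way to observe the same failure mode from a shell (a hypothetical probe, not part of the test):

  # nothing listens on port 4420 yet, so the TCP handshake is refused and
  # connect() fails with errno 111 (ECONNREFUSED), as in the posix.c errors above
  # (output format varies by netcat flavor)
  $ nc -zv 10.0.0.2 4420
  # -> nc: connect to 10.0.0.2 port 4420 (tcp) failed: Connection refused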
00:21:50.012 [2024-05-15 11:02:06.205853] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:50.012 [2024-05-15 11:02:06.206327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:50.012 [2024-05-15 11:02:06.206356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:50.012 [2024-05-15 11:02:06.206372] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:50.012 [2024-05-15 11:02:06.206589] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:50.012 [2024-05-15 11:02:06.206809] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:50.012 [2024-05-15 11:02:06.206831] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:50.012 [2024-05-15 11:02:06.206845] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:50.012 [2024-05-15 11:02:06.210140] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:50.012 [2024-05-15 11:02:06.219459] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:50.012 [2024-05-15 11:02:06.219921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:50.012 [2024-05-15 11:02:06.219956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:50.012 [2024-05-15 11:02:06.219982] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:50.012 [2024-05-15 11:02:06.220200] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:50.013 [2024-05-15 11:02:06.220421] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:50.013 [2024-05-15 11:02:06.220442] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:50.013 [2024-05-15 11:02:06.220456] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:50.013 11:02:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@856 -- # (( i == 0 ))
00:21:50.013 11:02:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@860 -- # return 0
00:21:50.013 11:02:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:21:50.013 11:02:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:21:50.013 11:02:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:21:50.013 [2024-05-15 11:02:06.223773] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
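The (( i == 0 )) / return 0 trace above is the tail of the harness's wait loop: start_nvmf_tgt only returns once the target answers, counting down a retry budget as it polls. In spirit it behaves like the sketch below (the helper name wait_for_rpc and the polling command are assumptions, not SPDK's literal code):

  # sketch: poll the target's RPC socket until it answers or the budget runs out
  wait_for_rpc() {
      local i=60
      while (( i > 0 )); do
          rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
          sleep 0.5
          (( i-- ))
      done
      (( i == 0 )) && return 1   # counted all the way down: target never came up
      return 0                   # success path, matching the trace above
  }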
00:21:50.013 [2024-05-15 11:02:06.233038] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:50.013 [2024-05-15 11:02:06.233489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:50.013 [2024-05-15 11:02:06.233517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:50.013 [2024-05-15 11:02:06.233532] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:50.013 [2024-05-15 11:02:06.233761] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:50.013 [2024-05-15 11:02:06.234005] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:50.013 [2024-05-15 11:02:06.234028] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:50.013 [2024-05-15 11:02:06.234047] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:50.013 [2024-05-15 11:02:06.237361] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:50.272 11:02:06 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:50.272 11:02:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:21:50.272 11:02:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:50.272 11:02:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:21:50.272 [2024-05-15 11:02:06.246741] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:50.272 [2024-05-15 11:02:06.247194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:50.272 [2024-05-15 11:02:06.247222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:50.272 [2024-05-15 11:02:06.247238] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:50.272 [2024-05-15 11:02:06.247468] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:50.272 [2024-05-15 11:02:06.247702] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:50.272 [2024-05-15 11:02:06.247724] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:50.272 [2024-05-15 11:02:06.247738] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:50.272 [2024-05-15 11:02:06.248813] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:50.272 [2024-05-15 11:02:06.251048] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:50.272 11:02:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:50.272 11:02:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:21:50.272 11:02:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:50.272 11:02:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:21:50.272 [2024-05-15 11:02:06.260331] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:50.272 [2024-05-15 11:02:06.260797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:50.272 [2024-05-15 11:02:06.260840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:50.272 [2024-05-15 11:02:06.260858] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:50.272 [2024-05-15 11:02:06.261123] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:50.272 [2024-05-15 11:02:06.261357] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:50.272 [2024-05-15 11:02:06.261378] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:50.272 [2024-05-15 11:02:06.261393] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:50.272 [2024-05-15 11:02:06.264607] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:50.272 [2024-05-15 11:02:06.273891] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:50.272 [2024-05-15 11:02:06.274339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:50.272 [2024-05-15 11:02:06.274381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:50.272 [2024-05-15 11:02:06.274398] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:50.272 [2024-05-15 11:02:06.274647] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:50.272 [2024-05-15 11:02:06.274855] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:50.272 [2024-05-15 11:02:06.274875] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:50.272 [2024-05-15 11:02:06.274888] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:50.272 [2024-05-15 11:02:06.278153] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:50.272 [2024-05-15 11:02:06.287567] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:50.272 [2024-05-15 11:02:06.288152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:50.272 [2024-05-15 11:02:06.288186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:50.272 [2024-05-15 11:02:06.288204] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:50.272 [2024-05-15 11:02:06.288459] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:50.272 [2024-05-15 11:02:06.288689] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:50.272 [2024-05-15 11:02:06.288711] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:50.272 [2024-05-15 11:02:06.288727] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:50.272 [2024-05-15 11:02:06.292185] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:50.272 [2024-05-15 11:02:06.301185] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:50.272 Malloc0
00:21:50.272 [2024-05-15 11:02:06.301746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:50.272 11:02:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:50.272 [2024-05-15 11:02:06.301779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:50.272 [2024-05-15 11:02:06.301801] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:50.272 11:02:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:50.272 [2024-05-15 11:02:06.302037] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:50.272 11:02:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:50.272 11:02:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:21:50.272 [2024-05-15 11:02:06.302262] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:50.272 [2024-05-15 11:02:06.302285] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:50.272 [2024-05-15 11:02:06.302308] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:50.272 [2024-05-15 11:02:06.305620] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:50.272 11:02:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:50.272 11:02:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:50.272 11:02:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:50.272 11:02:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:21:50.272 [2024-05-15 11:02:06.314838] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:50.272 [2024-05-15 11:02:06.315303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:50.272 [2024-05-15 11:02:06.315339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ade990 with addr=10.0.0.2, port=4420
00:21:50.272 [2024-05-15 11:02:06.315356] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ade990 is same with the state(5) to be set
00:21:50.272 [2024-05-15 11:02:06.315590] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ade990 (9): Bad file descriptor
00:21:50.272 [2024-05-15 11:02:06.315805] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:21:50.272 [2024-05-15 11:02:06.315825] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:21:50.272 [2024-05-15 11:02:06.315839] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:50.272 11:02:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:50.272 11:02:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:50.272 11:02:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:21:50.272 11:02:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:21:50.272 [2024-05-15 11:02:06.319164] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:50.272 [2024-05-15 11:02:06.320989] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09
00:21:50.272 [2024-05-15 11:02:06.321283] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:50.272 11:02:06 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:21:50.272 11:02:06 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2887361
00:21:50.272 [2024-05-15 11:02:06.328438] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:50.272 [2024-05-15 11:02:06.398895] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
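Taken together, the rpc_cmd trace above is the standard SPDK NVMe/TCP target bring-up: create the TCP transport, back it with a malloc bdev, create a subsystem, attach the namespace, then expose the listener, at which point the initiator's reconnects finally succeed. Issued directly rather than through the harness's rpc_cmd wrapper, the same sequence looks roughly like this (a sketch assuming a running nvmf_tgt and the default RPC socket):

  # same RPC sequence as traced above, issued via scripts/rpc.py
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport, same flags the harness passed (-u: IO unit size)
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420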
00:22:00.241
00:22:00.241 Latency(us)
00:22:00.241 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:00.241 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:00.241 Verification LBA range: start 0x0 length 0x4000
00:22:00.241 Nvme1n1 : 15.01 6530.47 25.51 10569.46 0.00 7459.92 1468.49 22427.88
00:22:00.241 ===================================================================================================================
00:22:00.241 Total : 6530.47 25.51 10569.46 0.00 7459.92 1468.49 22427.88
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:22:00.241 rmmod nvme_tcp
00:22:00.241 rmmod nvme_fabrics
00:22:00.241 rmmod nvme_keyring
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2888031 ']'
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2888031
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@946 -- # '[' -z 2888031 ']'
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@950 -- # kill -0 2888031
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # uname
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']'
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2888031
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # process_name=reactor_1
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']'
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2888031'
00:22:00.241 killing process with pid 2888031
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@965 -- # kill 2888031
00:22:00.241 [2024-05-15 11:02:15.148877] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:22:00.241 11:02:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@970 -- # wait 2888031
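For reference, the result block above is standard bdevperf output; a stand-alone run of roughly this shape would reproduce the reported job (the binary path and config file name are assumptions, and the flags mirror the logged parameters: verify workload, queue depth 128, 4096-byte IOs, ~15 s runtime):

  # hypothetical direct invocation matching the logged job parameters
  ./build/examples/bdevperf --json bdevperf_config.json -q 128 -o 4096 -w verify -t 15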
00:22:00.242 11:02:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:22:00.242 11:02:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:22:00.242 11:02:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:22:00.242 11:02:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:22:00.242 11:02:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:22:00.242 11:02:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:00.242 11:02:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:00.242 11:02:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:01.620 11:02:17 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:22:01.620
00:22:01.620 real 0m23.912s
00:22:01.620 user 1m3.454s
00:22:01.620 sys 0m4.718s
00:22:01.620 11:02:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1122 -- # xtrace_disable
00:22:01.620 11:02:17 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:22:01.620 ************************************
00:22:01.620 END TEST nvmf_bdevperf
00:22:01.620 ************************************
00:22:01.620 11:02:17 nvmf_tcp -- nvmf/nvmf.sh@121 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:22:01.620 11:02:17 nvmf_tcp -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']'
00:22:01.620 11:02:17 nvmf_tcp -- common/autotest_common.sh@1103 -- # xtrace_disable
00:22:01.620 11:02:17 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:22:01.620 ************************************
00:22:01.620 START TEST nvmf_target_disconnect
00:22:01.620 ************************************
00:22:01.620 11:02:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:22:01.620 * Looking for test storage...
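The nvmftestfini trace above tears the test environment down in a fixed order: drop the subsystem over RPC, unload the initiator-side kernel modules, kill the target process, and flush the test NIC's addressing. Done by hand it would look roughly like this (a sketch; the PID and interface name are the ones from this run):

  # hypothetical manual equivalent of the traced teardown
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # remove the subsystem first
  modprobe -r nvme-tcp nvme-fabrics   # unload initiator kernel modules (the rmmod lines above)
  kill 2888031                        # stop the nvmf_tgt reactor process
  ip -4 addr flush cvl_0_1            # drop IPv4 addressing from the test interface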
00:22:01.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:01.620 11:02:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:01.620 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:22:01.620 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.620 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.620 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.620 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.620 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.620 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.620 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.620 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.620 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestinit 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:22:01.621 11:02:17 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:04.167 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:04.167 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:22:04.167 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:04.167 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:04.167 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:04.167 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
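[editor's note] The trace just above and below shows gather_supported_nvmf_pci_devs bucketing NICs by PCI vendor:device ID: Intel E810 ports are 0x8086 with 0x1592/0x159b, X722 is 0x37d2, and the 0x15b3 entries cover the Mellanox ConnectX family. A minimal, hypothetical equivalent of that lookup with stock pciutils (not part of the test scripts):

  intel=8086
  for dev in 1592 159b; do        # the two E810 device IDs registered above
      lspci -D -d "${intel}:${dev}" | awk '{print $1}'   # -D keeps the domain:bus:dev.fn form
  done
  # each reported address maps to its netdev through /sys/bus/pci/devices/<addr>/net/*,
  # the same glob the trace expands a little further below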
00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:04.168 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:04.168 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.168 11:02:20 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:04.168 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:04.168 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:04.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:04.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:22:04.168 00:22:04.168 --- 10.0.0.2 ping statistics --- 00:22:04.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.168 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:04.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:04.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:22:04.168 00:22:04.168 --- 10.0.0.1 ping statistics --- 00:22:04.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:04.168 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:04.168 11:02:20 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:04.169 ************************************ 00:22:04.169 START TEST nvmf_target_disconnect_tc1 00:22:04.169 ************************************ 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc1 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # set +e 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:04.169 EAL: No 
free 2048 kB hugepages reported on node 1 00:22:04.169 [2024-05-15 11:02:20.284038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:04.169 [2024-05-15 11:02:20.284108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1543d60 with addr=10.0.0.2, port=4420 00:22:04.169 [2024-05-15 11:02:20.284144] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:04.169 [2024-05-15 11:02:20.284165] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:04.169 [2024-05-15 11:02:20.284179] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:22:04.169 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:22:04.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:22:04.169 Initializing NVMe Controllers 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # trap - ERR 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@33 -- # print_backtrace 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1149 -- # [[ hxBET =~ e ]] 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1149 -- # return 0 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@41 -- # set -e 00:22:04.169 00:22:04.169 real 0m0.099s 00:22:04.169 user 0m0.045s 00:22:04.169 sys 0m0.054s 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:04.169 ************************************ 00:22:04.169 END TEST nvmf_target_disconnect_tc1 00:22:04.169 ************************************ 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:04.169 ************************************ 00:22:04.169 START TEST nvmf_target_disconnect_tc2 00:22:04.169 ************************************ 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1121 -- # nvmf_target_disconnect_tc2 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:04.169 11:02:20 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2891593 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2891593 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 2891593 ']' 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:04.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:04.169 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:04.169 [2024-05-15 11:02:20.390201] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:22:04.169 [2024-05-15 11:02:20.390305] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:04.428 EAL: No free 2048 kB hugepages reported on node 1 00:22:04.428 [2024-05-15 11:02:20.467495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:04.428 [2024-05-15 11:02:20.574271] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:04.428 [2024-05-15 11:02:20.574323] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:04.428 [2024-05-15 11:02:20.574351] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:04.428 [2024-05-15 11:02:20.574362] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:04.428 [2024-05-15 11:02:20.574372] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
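[editor's note] What nvmf_tcp_init traced above actually built: the first E810 port (cvl_0_0) was moved into the cvl_0_0_ns_spdk namespace and addressed as the target at 10.0.0.2/24, the second port (cvl_0_1) stayed in the root namespace as the initiator at 10.0.0.1/24, and an iptables rule admits NVMe/TCP on port 4420, so the two sides talk over the physical link between the ports. Condensed from the ip/iptables/ping commands in the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                    # initiator to target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target to initiator

nvmfappstart (just above) then launches nvmf_tgt inside that namespace, waitforlisten polls the RPC socket until it answers, and the rpc_cmd calls traced below attach a 64 MiB / 512 B-block malloc bdev to a subsystem and open the TCP listener. A hedged approximation of that sequence; SPDK_ROOT stands in for this job's workspace checkout and the until-loop is a simplification of waitforlisten:

  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  until "$SPDK_ROOT/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  rpc() { "$SPDK_ROOT/scripts/rpc.py" "$@"; }
  rpc bdev_malloc_create 64 512 -b Malloc0
  rpc nvmf_create_transport -t tcp -o
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420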
00:22:04.428 [2024-05-15 11:02:20.574458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:04.428 [2024-05-15 11:02:20.577948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:04.428 [2024-05-15 11:02:20.578024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:22:04.428 [2024-05-15 11:02:20.578029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:04.686 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:04.686 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:22:04.686 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:04.686 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:04.686 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:04.686 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.686 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:04.686 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.686 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:04.686 Malloc0 00:22:04.686 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.686 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:04.686 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.686 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:04.686 [2024-05-15 11:02:20.760000] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.686 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.686 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:04.686 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.686 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:04.686 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.686 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:04.686 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.686 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:04.687 11:02:20 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.687 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:04.687 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.687 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:04.687 [2024-05-15 11:02:20.787994] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:04.687 [2024-05-15 11:02:20.788289] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.687 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.687 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:04.687 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.687 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:04.687 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.687 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # reconnectpid=2891623 00:22:04.687 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@52 -- # sleep 2 00:22:04.687 11:02:20 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:04.687 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.588 11:02:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@53 -- # kill -9 2891593 00:22:06.588 11:02:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@55 -- # sleep 2 00:22:06.588 Read completed with error (sct=0, sc=8) 00:22:06.588 starting I/O failed 00:22:06.588 Read completed with error (sct=0, sc=8) 00:22:06.588 starting I/O failed 00:22:06.588 Read completed with error (sct=0, sc=8) 00:22:06.588 starting I/O failed 00:22:06.588 Read completed with error (sct=0, sc=8) 00:22:06.588 starting I/O failed 00:22:06.588 Read completed with error (sct=0, sc=8) 00:22:06.588 starting I/O failed 00:22:06.588 Read completed with error (sct=0, sc=8) 00:22:06.588 starting I/O failed 00:22:06.588 Read completed with error (sct=0, sc=8) 00:22:06.588 starting I/O failed 00:22:06.588 Read completed with error (sct=0, sc=8) 00:22:06.588 starting I/O failed 00:22:06.588 Read completed with error (sct=0, sc=8) 00:22:06.588 starting I/O failed 00:22:06.588 Read completed with error (sct=0, sc=8) 00:22:06.588 starting I/O failed 00:22:06.588 Read completed with error (sct=0, sc=8) 00:22:06.588 starting I/O failed 00:22:06.588 Write completed with error (sct=0, sc=8) 00:22:06.589 
starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 [2024-05-15 11:02:22.814438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O 
failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 [2024-05-15 11:02:22.814781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 
Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 [2024-05-15 11:02:22.815153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Read completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.589 starting I/O failed 00:22:06.589 Write completed with error (sct=0, sc=8) 00:22:06.590 starting I/O failed 00:22:06.590 [2024-05-15 11:02:22.815488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ 
transport error -6 (No such device or address) on qpair id 1 00:22:06.590 [2024-05-15 11:02:22.815810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.590 [2024-05-15 11:02:22.815840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:06.590 qpair failed and we were unable to recover it. 00:22:06.590 [2024-05-15 11:02:22.816035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.590 [2024-05-15 11:02:22.816064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:06.590 qpair failed and we were unable to recover it. 00:22:06.590 [2024-05-15 11:02:22.816268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.590 [2024-05-15 11:02:22.816296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:06.590 qpair failed and we were unable to recover it. 00:22:06.590 [2024-05-15 11:02:22.816487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.590 [2024-05-15 11:02:22.816514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:06.590 qpair failed and we were unable to recover it. 00:22:06.590 [2024-05-15 11:02:22.816763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.590 [2024-05-15 11:02:22.816793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:06.590 qpair failed and we were unable to recover it. 00:22:06.590 [2024-05-15 11:02:22.817019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.590 [2024-05-15 11:02:22.817047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:06.590 qpair failed and we were unable to recover it. 00:22:06.590 [2024-05-15 11:02:22.817242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.590 [2024-05-15 11:02:22.817269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:06.590 qpair failed and we were unable to recover it. 00:22:06.590 [2024-05-15 11:02:22.817494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.590 [2024-05-15 11:02:22.817521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:06.590 qpair failed and we were unable to recover it. 00:22:06.590 [2024-05-15 11:02:22.817797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.590 [2024-05-15 11:02:22.817846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:06.590 qpair failed and we were unable to recover it. 00:22:06.590 [2024-05-15 11:02:22.818092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.590 [2024-05-15 11:02:22.818119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:06.590 qpair failed and we were unable to recover it. 
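[editor's note] The records above are the expected tc2 signature, not a harness malfunction: once the target pid is SIGKILLed, every outstanding I/O completes in error, 32 per qpair across the four qpairs (matching the reconnect example's -q 32 and -c 0xF) with the per-qpair streams interleaved in the output; each qpair then reports a CQ transport error -6 (ENXIO), and from then on every reconnect attempt fails in connect() with errno = 111, because nothing listens on 10.0.0.2:4420 any more. On a typical Linux box with kernel headers installed the errno decodes directly:

  grep -w 111 /usr/include/asm-generic/errno.h
  # #define ECONNREFUSED    111     /* Connection refused */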
00:22:06.590 [2024-05-15 11:02:22.818298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.590 [2024-05-15 11:02:22.818324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:06.590 qpair failed and we were unable to recover it. 00:22:06.590 [2024-05-15 11:02:22.818610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.590 [2024-05-15 11:02:22.818655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.590 qpair failed and we were unable to recover it. 00:22:06.590 [2024-05-15 11:02:22.818872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.590 [2024-05-15 11:02:22.818900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.590 qpair failed and we were unable to recover it. 00:22:06.590 [2024-05-15 11:02:22.819103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.590 [2024-05-15 11:02:22.819130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.590 qpair failed and we were unable to recover it. 00:22:06.590 [2024-05-15 11:02:22.819385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.590 [2024-05-15 11:02:22.819411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.590 qpair failed and we were unable to recover it. 00:22:06.590 [2024-05-15 11:02:22.819627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.590 [2024-05-15 11:02:22.819669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.590 qpair failed and we were unable to recover it. 00:22:06.590 [2024-05-15 11:02:22.819899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.590 [2024-05-15 11:02:22.819925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.590 qpair failed and we were unable to recover it. 00:22:06.590 [2024-05-15 11:02:22.820131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.590 [2024-05-15 11:02:22.820156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.590 qpair failed and we were unable to recover it. 00:22:06.590 [2024-05-15 11:02:22.820395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.590 [2024-05-15 11:02:22.820421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.590 qpair failed and we were unable to recover it. 00:22:06.590 [2024-05-15 11:02:22.820773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.590 [2024-05-15 11:02:22.820823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.590 qpair failed and we were unable to recover it. 
00:22:06.865 [2024-05-15 11:02:22.821035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.865 [2024-05-15 11:02:22.821061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.865 qpair failed and we were unable to recover it. 00:22:06.865 [2024-05-15 11:02:22.821247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.865 [2024-05-15 11:02:22.821273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.865 qpair failed and we were unable to recover it. 00:22:06.865 [2024-05-15 11:02:22.821475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.865 [2024-05-15 11:02:22.821501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.865 qpair failed and we were unable to recover it. 00:22:06.865 [2024-05-15 11:02:22.821772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.865 [2024-05-15 11:02:22.821817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.865 qpair failed and we were unable to recover it. 00:22:06.865 [2024-05-15 11:02:22.822061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.865 [2024-05-15 11:02:22.822087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.865 qpair failed and we were unable to recover it. 00:22:06.865 [2024-05-15 11:02:22.822290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.865 [2024-05-15 11:02:22.822315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.865 qpair failed and we were unable to recover it. 00:22:06.865 [2024-05-15 11:02:22.822539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.865 [2024-05-15 11:02:22.822581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.865 qpair failed and we were unable to recover it. 00:22:06.865 [2024-05-15 11:02:22.822798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.865 [2024-05-15 11:02:22.822826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.865 qpair failed and we were unable to recover it. 00:22:06.865 [2024-05-15 11:02:22.823093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.865 [2024-05-15 11:02:22.823119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.865 qpair failed and we were unable to recover it. 00:22:06.865 [2024-05-15 11:02:22.823307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.865 [2024-05-15 11:02:22.823333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.865 qpair failed and we were unable to recover it. 
00:22:06.865 [2024-05-15 11:02:22.823647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.865 [2024-05-15 11:02:22.823705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.865 qpair failed and we were unable to recover it. 00:22:06.865 [2024-05-15 11:02:22.823912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.865 [2024-05-15 11:02:22.823943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.865 qpair failed and we were unable to recover it. 00:22:06.865 [2024-05-15 11:02:22.824143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.865 [2024-05-15 11:02:22.824169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.865 qpair failed and we were unable to recover it. 00:22:06.865 [2024-05-15 11:02:22.824410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.865 [2024-05-15 11:02:22.824438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.865 qpair failed and we were unable to recover it. 00:22:06.865 [2024-05-15 11:02:22.824815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.865 [2024-05-15 11:02:22.824873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.865 qpair failed and we were unable to recover it. 00:22:06.865 [2024-05-15 11:02:22.825117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.865 [2024-05-15 11:02:22.825144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.865 qpair failed and we were unable to recover it. 00:22:06.865 [2024-05-15 11:02:22.825370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.866 [2024-05-15 11:02:22.825395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.866 qpair failed and we were unable to recover it. 00:22:06.866 [2024-05-15 11:02:22.825631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.866 [2024-05-15 11:02:22.825675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.866 qpair failed and we were unable to recover it. 00:22:06.866 [2024-05-15 11:02:22.825903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.866 [2024-05-15 11:02:22.825945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.866 qpair failed and we were unable to recover it. 00:22:06.866 [2024-05-15 11:02:22.826155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.866 [2024-05-15 11:02:22.826180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.866 qpair failed and we were unable to recover it. 
00:22:06.866 [2024-05-15 11:02:22.826420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.866 [2024-05-15 11:02:22.826445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:06.866 qpair failed and we were unable to recover it.
[... the identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet for tqpair=0x1e0d420 (addr=10.0.0.2, port=4420) repeats continuously over this interval ...]
00:22:06.872 [2024-05-15 11:02:22.878366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.872 [2024-05-15 11:02:22.878394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:06.872 qpair failed and we were unable to recover it.
00:22:06.872 [2024-05-15 11:02:22.878625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.872 [2024-05-15 11:02:22.878649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.872 qpair failed and we were unable to recover it. 00:22:06.872 [2024-05-15 11:02:22.878891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.872 [2024-05-15 11:02:22.878918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.872 qpair failed and we were unable to recover it. 00:22:06.872 [2024-05-15 11:02:22.879192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.872 [2024-05-15 11:02:22.879221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.872 qpair failed and we were unable to recover it. 00:22:06.872 [2024-05-15 11:02:22.879449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.872 [2024-05-15 11:02:22.879476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.879727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.879752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.879944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.879976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.880200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.880226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.880447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.880473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.880685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.880726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.880945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.880971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 
00:22:06.873 [2024-05-15 11:02:22.881183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.881211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.881409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.881437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.881669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.881694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.881939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.881965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.882172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.882197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.882371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.882396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.882598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.882623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.882829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.882856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.883081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.883107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.883324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.883364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 
00:22:06.873 [2024-05-15 11:02:22.883600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.883624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.883802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.883829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.884066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.884096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.884331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.884360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.884572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.884597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.884782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.884807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.885043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.885071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.885338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.885363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.885604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.885634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.885905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.885936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 
00:22:06.873 [2024-05-15 11:02:22.886147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.886172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.886409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.886435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.886621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.886651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.873 [2024-05-15 11:02:22.886834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.873 [2024-05-15 11:02:22.886860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.873 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.887061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.887090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.887317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.887345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.887604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.887629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.887817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.887842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.888050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.888078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.888263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.888291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 
00:22:06.874 [2024-05-15 11:02:22.888528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.888553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.888765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.888794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.888999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.889026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.889219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.889243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.889454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.889480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.889668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.889693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.889941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.889970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.890169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.890198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.890427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.890453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.890709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.890734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 
00:22:06.874 [2024-05-15 11:02:22.890966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.890992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.891180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.891205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.891412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.891437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.891650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.891678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.891936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.891962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.892196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.892222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.892428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.892456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.892703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.892728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.892947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.892976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.893218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.893248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 
00:22:06.874 [2024-05-15 11:02:22.893428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.893453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.893719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.893747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.894006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.894032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.894218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.894244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.894421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.894448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.894690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.894716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.894907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.894939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.895151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.895177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.895420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.895448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.874 qpair failed and we were unable to recover it. 00:22:06.874 [2024-05-15 11:02:22.895656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.874 [2024-05-15 11:02:22.895681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 
00:22:06.875 [2024-05-15 11:02:22.895883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.895907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 00:22:06.875 [2024-05-15 11:02:22.896106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.896135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 00:22:06.875 [2024-05-15 11:02:22.896333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.896358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 00:22:06.875 [2024-05-15 11:02:22.896542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.896568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 00:22:06.875 [2024-05-15 11:02:22.896747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.896774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 00:22:06.875 [2024-05-15 11:02:22.897055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.897081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 00:22:06.875 [2024-05-15 11:02:22.897328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.897354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 00:22:06.875 [2024-05-15 11:02:22.897601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.897629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 00:22:06.875 [2024-05-15 11:02:22.897855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.897881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 00:22:06.875 [2024-05-15 11:02:22.898065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.898091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 
00:22:06.875 [2024-05-15 11:02:22.898307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.898336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 00:22:06.875 [2024-05-15 11:02:22.898562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.898587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 00:22:06.875 [2024-05-15 11:02:22.898849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.898877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 00:22:06.875 [2024-05-15 11:02:22.899096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.899122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 00:22:06.875 [2024-05-15 11:02:22.899336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.899361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 00:22:06.875 [2024-05-15 11:02:22.899567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.899596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 00:22:06.875 [2024-05-15 11:02:22.899852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.899877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 00:22:06.875 [2024-05-15 11:02:22.900093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.900119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 00:22:06.875 [2024-05-15 11:02:22.900326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.900351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 00:22:06.875 [2024-05-15 11:02:22.900568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.900598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 
00:22:06.875 [2024-05-15 11:02:22.900830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.900857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 00:22:06.875 [2024-05-15 11:02:22.901076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.901102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 00:22:06.875 [2024-05-15 11:02:22.901286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.901310] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 00:22:06.875 [2024-05-15 11:02:22.901498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.901523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 00:22:06.875 [2024-05-15 11:02:22.901729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.901757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 00:22:06.875 [2024-05-15 11:02:22.901986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.902015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 00:22:06.875 [2024-05-15 11:02:22.902252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.902276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.875 qpair failed and we were unable to recover it. 00:22:06.875 [2024-05-15 11:02:22.902490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.875 [2024-05-15 11:02:22.902515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.876 qpair failed and we were unable to recover it. 00:22:06.876 [2024-05-15 11:02:22.902689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.876 [2024-05-15 11:02:22.902714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.876 qpair failed and we were unable to recover it. 00:22:06.876 [2024-05-15 11:02:22.902952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.876 [2024-05-15 11:02:22.902978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.876 qpair failed and we were unable to recover it. 
00:22:06.876 [2024-05-15 11:02:22.903186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.876 [2024-05-15 11:02:22.903233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.876 qpair failed and we were unable to recover it. 00:22:06.876 [2024-05-15 11:02:22.903436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.876 [2024-05-15 11:02:22.903466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.876 qpair failed and we were unable to recover it. 00:22:06.876 [2024-05-15 11:02:22.903725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.876 [2024-05-15 11:02:22.903750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.876 qpair failed and we were unable to recover it. 00:22:06.876 [2024-05-15 11:02:22.903985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.876 [2024-05-15 11:02:22.904013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.876 qpair failed and we were unable to recover it. 00:22:06.876 [2024-05-15 11:02:22.904272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.876 [2024-05-15 11:02:22.904298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.876 qpair failed and we were unable to recover it. 00:22:06.876 [2024-05-15 11:02:22.904507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.876 [2024-05-15 11:02:22.904532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.876 qpair failed and we were unable to recover it. 00:22:06.876 [2024-05-15 11:02:22.904746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.876 [2024-05-15 11:02:22.904772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.876 qpair failed and we were unable to recover it. 00:22:06.876 [2024-05-15 11:02:22.905008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.876 [2024-05-15 11:02:22.905039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.876 qpair failed and we were unable to recover it. 00:22:06.876 [2024-05-15 11:02:22.905239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.876 [2024-05-15 11:02:22.905266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.876 qpair failed and we were unable to recover it. 00:22:06.876 [2024-05-15 11:02:22.905456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.876 [2024-05-15 11:02:22.905480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.876 qpair failed and we were unable to recover it. 
00:22:06.876 [2024-05-15 11:02:22.905685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.876 [2024-05-15 11:02:22.905712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.876 qpair failed and we were unable to recover it. 00:22:06.876 [2024-05-15 11:02:22.905949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.876 [2024-05-15 11:02:22.905975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.876 qpair failed and we were unable to recover it. 00:22:06.876 [2024-05-15 11:02:22.906191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.876 [2024-05-15 11:02:22.906216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.876 qpair failed and we were unable to recover it. 00:22:06.876 [2024-05-15 11:02:22.906423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.876 [2024-05-15 11:02:22.906449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.876 qpair failed and we were unable to recover it. 00:22:06.876 [2024-05-15 11:02:22.906667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.876 [2024-05-15 11:02:22.906692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.876 qpair failed and we were unable to recover it. 00:22:06.876 [2024-05-15 11:02:22.906869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.876 [2024-05-15 11:02:22.906895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.876 qpair failed and we were unable to recover it. 00:22:06.876 [2024-05-15 11:02:22.907107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.876 [2024-05-15 11:02:22.907136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.876 qpair failed and we were unable to recover it. 00:22:06.876 [2024-05-15 11:02:22.907359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.876 [2024-05-15 11:02:22.907384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.876 qpair failed and we were unable to recover it. 00:22:06.876 [2024-05-15 11:02:22.907585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.876 [2024-05-15 11:02:22.907613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.876 qpair failed and we were unable to recover it. 00:22:06.876 [2024-05-15 11:02:22.907843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.876 [2024-05-15 11:02:22.907868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.876 qpair failed and we were unable to recover it. 
00:22:06.876 [2024-05-15 11:02:22.908058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.876 [2024-05-15 11:02:22.908084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.876 qpair failed and we were unable to recover it. 00:22:06.876 [2024-05-15 11:02:22.908322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.876 [2024-05-15 11:02:22.908350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.876 qpair failed and we were unable to recover it. 00:22:06.876 [2024-05-15 11:02:22.908572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.876 [2024-05-15 11:02:22.908600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.876 qpair failed and we were unable to recover it. 00:22:06.877 [2024-05-15 11:02:22.908823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.877 [2024-05-15 11:02:22.908848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.877 qpair failed and we were unable to recover it. 00:22:06.877 [2024-05-15 11:02:22.909079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.877 [2024-05-15 11:02:22.909108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.877 qpair failed and we were unable to recover it. 00:22:06.877 [2024-05-15 11:02:22.909365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.877 [2024-05-15 11:02:22.909390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.877 qpair failed and we were unable to recover it. 00:22:06.877 [2024-05-15 11:02:22.909607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.877 [2024-05-15 11:02:22.909633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.877 qpair failed and we were unable to recover it. 00:22:06.877 [2024-05-15 11:02:22.909851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.877 [2024-05-15 11:02:22.909882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.877 qpair failed and we were unable to recover it. 00:22:06.877 [2024-05-15 11:02:22.910119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.877 [2024-05-15 11:02:22.910145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.877 qpair failed and we were unable to recover it. 00:22:06.877 [2024-05-15 11:02:22.910376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.877 [2024-05-15 11:02:22.910402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.877 qpair failed and we were unable to recover it. 
00:22:06.877 [2024-05-15 11:02:22.910674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.877 [2024-05-15 11:02:22.910699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.877 qpair failed and we were unable to recover it. 00:22:06.877 [2024-05-15 11:02:22.910907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.877 [2024-05-15 11:02:22.910937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.877 qpair failed and we were unable to recover it. 00:22:06.877 [2024-05-15 11:02:22.911146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.877 [2024-05-15 11:02:22.911171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.877 qpair failed and we were unable to recover it. 00:22:06.877 [2024-05-15 11:02:22.911377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.877 [2024-05-15 11:02:22.911405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.877 qpair failed and we were unable to recover it. 00:22:06.877 [2024-05-15 11:02:22.911633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.877 [2024-05-15 11:02:22.911661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.877 qpair failed and we were unable to recover it. 00:22:06.877 [2024-05-15 11:02:22.911896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.877 [2024-05-15 11:02:22.911922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.877 qpair failed and we were unable to recover it. 00:22:06.877 [2024-05-15 11:02:22.912176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.877 [2024-05-15 11:02:22.912204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.877 qpair failed and we were unable to recover it. 00:22:06.877 [2024-05-15 11:02:22.912413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.877 [2024-05-15 11:02:22.912441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.877 qpair failed and we were unable to recover it. 00:22:06.877 [2024-05-15 11:02:22.912679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.877 [2024-05-15 11:02:22.912704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.877 qpair failed and we were unable to recover it. 00:22:06.877 [2024-05-15 11:02:22.912973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.877 [2024-05-15 11:02:22.913002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.877 qpair failed and we were unable to recover it. 
[log condensed: the same three-line sequence -- posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. -- repeats continuously from 11:02:22.913 through 11:02:22.963.]
00:22:06.884 [2024-05-15 11:02:22.963941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.963970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.964168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.964196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.964453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.964478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.964679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.964708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.964917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.964954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.965165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.965191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.965400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.965428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.965637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.965665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.965919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.965962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.966211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.966236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 
00:22:06.884 [2024-05-15 11:02:22.966449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.966475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.966685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.966710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.966945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.966988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.967167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.967192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.967442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.967467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.967711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.967737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.967939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.967965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.968202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.968227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.968468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.968494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.968676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.968701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 
00:22:06.884 [2024-05-15 11:02:22.968883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.968910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.969134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.969160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.969374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.969399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.969574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.969598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.969783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.969809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.970065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.970094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.970326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.970351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.970577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.970605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.970835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.970861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.971056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.971081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 
00:22:06.884 [2024-05-15 11:02:22.971315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.971344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.971572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.971605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.971860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.971885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.972138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.972163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.972429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.972457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.972713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.972739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.884 [2024-05-15 11:02:22.972954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.884 [2024-05-15 11:02:22.972981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.884 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.973197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.973224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.973454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.973479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.973713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.973741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 
00:22:06.885 [2024-05-15 11:02:22.974009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.974037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.974299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.974324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.974576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.974604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.974834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.974862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.975101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.975127] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.975347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.975375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.975595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.975624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.975864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.975889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.976163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.976189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.976458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.976483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 
00:22:06.885 [2024-05-15 11:02:22.976727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.976753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.977003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.977032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.977291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.977319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.977561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.977586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.977853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.977881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.978149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.978178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.978380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.978406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.978644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.978669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.978911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.978950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.979217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.979243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 
00:22:06.885 [2024-05-15 11:02:22.979450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.979478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.979714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.979739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.979984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.980010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.980247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.980275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.980506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.980534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.980765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.980790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.981038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.981064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.981334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.981359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.981574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.981600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.981776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.981803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 
00:22:06.885 [2024-05-15 11:02:22.982041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.982070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.982302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.885 [2024-05-15 11:02:22.982328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.885 qpair failed and we were unable to recover it. 00:22:06.885 [2024-05-15 11:02:22.982570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.982599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.982856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.982884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.983117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.983143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.983363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.983390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.983619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.983647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.983860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.983887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.984097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.984123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.984321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.984347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 
00:22:06.886 [2024-05-15 11:02:22.984530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.984556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.984790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.984818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.985035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.985062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.985271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.985296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.985537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.985562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.985772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.985814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.986052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.986077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.986290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.986318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.986534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.986560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.986754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.986780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 
00:22:06.886 [2024-05-15 11:02:22.986963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.986989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.987196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.987221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.987486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.987511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.987746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.987774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.988039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.988065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.988280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.988306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.988547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.988574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.988773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.988801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.989064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.989090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.989328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.989361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 
00:22:06.886 [2024-05-15 11:02:22.989595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.989623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.989860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.989886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.990127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.990152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.990429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.990458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.990659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.990685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.990864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.886 [2024-05-15 11:02:22.990888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.886 qpair failed and we were unable to recover it. 00:22:06.886 [2024-05-15 11:02:22.991092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.887 [2024-05-15 11:02:22.991118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.887 qpair failed and we were unable to recover it. 00:22:06.887 [2024-05-15 11:02:22.991312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.887 [2024-05-15 11:02:22.991337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.887 qpair failed and we were unable to recover it. 00:22:06.887 [2024-05-15 11:02:22.991543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.887 [2024-05-15 11:02:22.991572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.887 qpair failed and we were unable to recover it. 00:22:06.887 [2024-05-15 11:02:22.991830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.887 [2024-05-15 11:02:22.991855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.887 qpair failed and we were unable to recover it. 
00:22:06.887 [2024-05-15 11:02:22.992094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.887 [2024-05-15 11:02:22.992120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.887 qpair failed and we were unable to recover it. 00:22:06.887 [2024-05-15 11:02:22.992325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.887 [2024-05-15 11:02:22.992352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.887 qpair failed and we were unable to recover it. 00:22:06.887 [2024-05-15 11:02:22.992608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.887 [2024-05-15 11:02:22.992633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.887 qpair failed and we were unable to recover it. 00:22:06.887 [2024-05-15 11:02:22.992816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.887 [2024-05-15 11:02:22.992840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.887 qpair failed and we were unable to recover it. 00:22:06.887 [2024-05-15 11:02:22.993081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.887 [2024-05-15 11:02:22.993107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.887 qpair failed and we were unable to recover it. 00:22:06.887 [2024-05-15 11:02:22.993368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.887 [2024-05-15 11:02:22.993396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.887 qpair failed and we were unable to recover it. 00:22:06.887 [2024-05-15 11:02:22.993628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.887 [2024-05-15 11:02:22.993653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.887 qpair failed and we were unable to recover it. 00:22:06.887 [2024-05-15 11:02:22.993894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.887 [2024-05-15 11:02:22.993922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.887 qpair failed and we were unable to recover it. 00:22:06.887 [2024-05-15 11:02:22.994191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.887 [2024-05-15 11:02:22.994216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.887 qpair failed and we were unable to recover it. 00:22:06.887 [2024-05-15 11:02:22.994394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.887 [2024-05-15 11:02:22.994419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.887 qpair failed and we were unable to recover it. 
00:22:06.887 [2024-05-15 11:02:22.994628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.887 [2024-05-15 11:02:22.994656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.887 qpair failed and we were unable to recover it. 00:22:06.887 [2024-05-15 11:02:22.994870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.887 [2024-05-15 11:02:22.994898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.887 qpair failed and we were unable to recover it. 00:22:06.887 [2024-05-15 11:02:22.995113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.887 [2024-05-15 11:02:22.995139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.887 qpair failed and we were unable to recover it. 00:22:06.887 [2024-05-15 11:02:22.995380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.887 [2024-05-15 11:02:22.995408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.887 qpair failed and we were unable to recover it. 00:22:06.887 [2024-05-15 11:02:22.995617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.887 [2024-05-15 11:02:22.995642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.887 qpair failed and we were unable to recover it. 00:22:06.887 [2024-05-15 11:02:22.995855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.887 [2024-05-15 11:02:22.995880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.887 qpair failed and we were unable to recover it. 00:22:06.887 [2024-05-15 11:02:22.996089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.887 [2024-05-15 11:02:22.996119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.887 qpair failed and we were unable to recover it. 00:22:06.887 [2024-05-15 11:02:22.996367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.887 [2024-05-15 11:02:22.996392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.888 qpair failed and we were unable to recover it. 00:22:06.888 [2024-05-15 11:02:22.996600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.888 [2024-05-15 11:02:22.996624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.888 qpair failed and we were unable to recover it. 00:22:06.888 [2024-05-15 11:02:22.996884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.888 [2024-05-15 11:02:22.996912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.888 qpair failed and we were unable to recover it. 
00:22:06.888 [2024-05-15 11:02:22.997198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.888 [2024-05-15 11:02:22.997224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:06.888 qpair failed and we were unable to recover it.
00:22:06.888 [... the three messages above repeat, with only the timestamps advancing from 11:02:22.997 to 11:02:23.052, roughly 200 more times for the same tqpair=0x1e0d420 (addr=10.0.0.2, port=4420); every retry fails with errno = 111 ...]
00:22:06.894 [2024-05-15 11:02:23.052054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.894 [2024-05-15 11:02:23.052080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:06.894 qpair failed and we were unable to recover it.
00:22:06.894 [2024-05-15 11:02:23.052318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.894 [2024-05-15 11:02:23.052346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.894 qpair failed and we were unable to recover it. 00:22:06.894 [2024-05-15 11:02:23.052607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.894 [2024-05-15 11:02:23.052633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.894 qpair failed and we were unable to recover it. 00:22:06.894 [2024-05-15 11:02:23.052833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.894 [2024-05-15 11:02:23.052861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.894 qpair failed and we were unable to recover it. 00:22:06.894 [2024-05-15 11:02:23.053084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.894 [2024-05-15 11:02:23.053118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.894 qpair failed and we were unable to recover it. 00:22:06.894 [2024-05-15 11:02:23.053352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.894 [2024-05-15 11:02:23.053380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.894 qpair failed and we were unable to recover it. 00:22:06.894 [2024-05-15 11:02:23.053638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.894 [2024-05-15 11:02:23.053663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.894 qpair failed and we were unable to recover it. 00:22:06.894 [2024-05-15 11:02:23.053911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.894 [2024-05-15 11:02:23.053942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.894 qpair failed and we were unable to recover it. 00:22:06.894 [2024-05-15 11:02:23.054134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.894 [2024-05-15 11:02:23.054159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.894 qpair failed and we were unable to recover it. 00:22:06.894 [2024-05-15 11:02:23.054373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.894 [2024-05-15 11:02:23.054401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.894 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.054663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.054689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 
00:22:06.895 [2024-05-15 11:02:23.054889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.054914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.055143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.055169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.055415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.055443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.055699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.055725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.055902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.055927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.056177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.056207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.056467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.056496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.056739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.056764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.057003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.057032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.057253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.057281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 
00:22:06.895 [2024-05-15 11:02:23.057536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.057564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.057797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.057822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.058061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.058090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.058382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.058407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.058639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.058664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.058938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.058967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.059201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.059245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.059657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.059706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.059939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.059969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.060178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.060204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 
00:22:06.895 [2024-05-15 11:02:23.060462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.060490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.060807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.060832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.061071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.061100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.061313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.061338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.061584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.061609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.061842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.061870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.062126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.062152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.062331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.062356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.062565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.895 [2024-05-15 11:02:23.062593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.895 qpair failed and we were unable to recover it. 00:22:06.895 [2024-05-15 11:02:23.063000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.063029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 
00:22:06.896 [2024-05-15 11:02:23.063260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.063288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.063524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.063549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.063784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.063812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.064027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.064053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.064282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.064311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.064551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.064576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.064789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.064816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.065064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.065091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.065328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.065356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.065569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.065594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 
00:22:06.896 [2024-05-15 11:02:23.065826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.065867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.066068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.066101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.066303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.066332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.066549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.066576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.066944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.066994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.067197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.067225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.067462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.067490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.067718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.067744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.067951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.067981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.068186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.068215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 
00:22:06.896 [2024-05-15 11:02:23.068472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.068497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.068708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.068734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.068996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.069025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.069260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.069289] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.069531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.069559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.069775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.069800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.070041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.070070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.070287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.070312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.070509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.070539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.070744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.070771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 
00:22:06.896 [2024-05-15 11:02:23.071023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.071050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.071265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.071296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.896 [2024-05-15 11:02:23.071515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.896 [2024-05-15 11:02:23.071543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.896 qpair failed and we were unable to recover it. 00:22:06.897 [2024-05-15 11:02:23.071776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.897 [2024-05-15 11:02:23.071801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.897 qpair failed and we were unable to recover it. 00:22:06.897 [2024-05-15 11:02:23.072082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.897 [2024-05-15 11:02:23.072111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.897 qpair failed and we were unable to recover it. 00:22:06.897 [2024-05-15 11:02:23.072370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.897 [2024-05-15 11:02:23.072395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.897 qpair failed and we were unable to recover it. 00:22:06.897 [2024-05-15 11:02:23.072621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.897 [2024-05-15 11:02:23.072649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.897 qpair failed and we were unable to recover it. 00:22:06.897 [2024-05-15 11:02:23.072868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.897 [2024-05-15 11:02:23.072893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.897 qpair failed and we were unable to recover it. 00:22:06.897 [2024-05-15 11:02:23.073152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.897 [2024-05-15 11:02:23.073178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.897 qpair failed and we were unable to recover it. 00:22:06.897 [2024-05-15 11:02:23.073396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.897 [2024-05-15 11:02:23.073438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.897 qpair failed and we were unable to recover it. 
00:22:06.897 [2024-05-15 11:02:23.073662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.897 [2024-05-15 11:02:23.073690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.897 qpair failed and we were unable to recover it. 00:22:06.897 [2024-05-15 11:02:23.073919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.897 [2024-05-15 11:02:23.073954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.897 qpair failed and we were unable to recover it. 00:22:06.897 [2024-05-15 11:02:23.074228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.897 [2024-05-15 11:02:23.074253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.897 qpair failed and we were unable to recover it. 00:22:06.897 [2024-05-15 11:02:23.074668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.897 [2024-05-15 11:02:23.074725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.897 qpair failed and we were unable to recover it. 00:22:06.897 [2024-05-15 11:02:23.074962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.897 [2024-05-15 11:02:23.074988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.897 qpair failed and we were unable to recover it. 00:22:06.897 [2024-05-15 11:02:23.075211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.897 [2024-05-15 11:02:23.075237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.897 qpair failed and we were unable to recover it. 00:22:06.897 [2024-05-15 11:02:23.075517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.897 [2024-05-15 11:02:23.075542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.897 qpair failed and we were unable to recover it. 00:22:06.897 [2024-05-15 11:02:23.075786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.897 [2024-05-15 11:02:23.075814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.897 qpair failed and we were unable to recover it. 00:22:06.897 [2024-05-15 11:02:23.076082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.897 [2024-05-15 11:02:23.076111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.897 qpair failed and we were unable to recover it. 00:22:06.897 [2024-05-15 11:02:23.076340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.897 [2024-05-15 11:02:23.076366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.897 qpair failed and we were unable to recover it. 
00:22:06.897 [2024-05-15 11:02:23.076630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.897 [2024-05-15 11:02:23.076661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.897 qpair failed and we were unable to recover it. 00:22:06.897 [2024-05-15 11:02:23.076953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.897 [2024-05-15 11:02:23.076980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.897 qpair failed and we were unable to recover it. 00:22:06.897 [2024-05-15 11:02:23.077244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.897 [2024-05-15 11:02:23.077272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.897 qpair failed and we were unable to recover it. 00:22:06.897 [2024-05-15 11:02:23.077503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.897 [2024-05-15 11:02:23.077529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.897 qpair failed and we were unable to recover it. 00:22:06.897 [2024-05-15 11:02:23.077763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.897 [2024-05-15 11:02:23.077792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.897 qpair failed and we were unable to recover it. 00:22:06.897 [2024-05-15 11:02:23.078104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.897 [2024-05-15 11:02:23.078135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.897 qpair failed and we were unable to recover it. 00:22:06.897 [2024-05-15 11:02:23.078379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.897 [2024-05-15 11:02:23.078407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.897 qpair failed and we were unable to recover it. 00:22:06.897 [2024-05-15 11:02:23.078639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.897 [2024-05-15 11:02:23.078665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.897 qpair failed and we were unable to recover it. 00:22:06.897 [2024-05-15 11:02:23.078870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.897 [2024-05-15 11:02:23.078903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:06.897 qpair failed and we were unable to recover it. 
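For context on the loop above: errno 111 on Linux is ECONNREFUSED, i.e. nothing was accepting connections at 10.0.0.2:4420 (the NVMe/TCP well-known port) while the initiator kept retrying; posix_sock_create() surfaces the raw connect() errno and nvme_tcp_qpair_connect_sock() then reports the qpair connect failure seen in each record. A minimal stand-alone sketch, not part of the test suite, that reproduces the same errno against an address with no listener (address and port mirror the log but are otherwise arbitrary):

    /* Illustrative only: connect() to a port with no listener fails with
     * errno 111 (ECONNREFUSED) on Linux. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;

        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(4420),   /* NVMe/TCP well-known port */
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            /* Prints: connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }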
00:22:06.897 [2024-05-15 11:02:23.079049] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0a0b0 is same with the state(5) to be set
00:22:06.897 [2024-05-15 11:02:23.079377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:06.897 [2024-05-15 11:02:23.079421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420
00:22:06.897 qpair failed and we were unable to recover it.
00:22:07.171 (the same three-line sequence repeats for tqpair=0x7f3530000b90 between 11:02:23.079 and 11:02:23.095; intermediate occurrences elided)
00:22:07.172 [2024-05-15 11:02:23.095968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.172 [2024-05-15 11:02:23.095995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420
00:22:07.172 qpair failed and we were unable to recover it.
00:22:07.172 [2024-05-15 11:02:23.096207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.172 [2024-05-15 11:02:23.096233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.172 qpair failed and we were unable to recover it. 00:22:07.172 [2024-05-15 11:02:23.096436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.172 [2024-05-15 11:02:23.096462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.172 qpair failed and we were unable to recover it. 00:22:07.172 [2024-05-15 11:02:23.096679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.172 [2024-05-15 11:02:23.096705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.172 qpair failed and we were unable to recover it. 00:22:07.172 [2024-05-15 11:02:23.096920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.172 [2024-05-15 11:02:23.096955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.172 qpair failed and we were unable to recover it. 00:22:07.172 [2024-05-15 11:02:23.097144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.172 [2024-05-15 11:02:23.097170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.172 qpair failed and we were unable to recover it. 00:22:07.172 [2024-05-15 11:02:23.097361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.172 [2024-05-15 11:02:23.097387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.172 qpair failed and we were unable to recover it. 00:22:07.172 [2024-05-15 11:02:23.097596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.172 [2024-05-15 11:02:23.097621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.172 qpair failed and we were unable to recover it. 00:22:07.172 [2024-05-15 11:02:23.097845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.172 [2024-05-15 11:02:23.097884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.172 qpair failed and we were unable to recover it. 00:22:07.172 [2024-05-15 11:02:23.098151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.172 [2024-05-15 11:02:23.098182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.172 qpair failed and we were unable to recover it. 00:22:07.172 [2024-05-15 11:02:23.098384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.172 [2024-05-15 11:02:23.098413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.172 qpair failed and we were unable to recover it. 
00:22:07.172 [2024-05-15 11:02:23.098670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.172 [2024-05-15 11:02:23.098697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.172 qpair failed and we were unable to recover it. 00:22:07.172 [2024-05-15 11:02:23.098954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.172 [2024-05-15 11:02:23.098982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.172 qpair failed and we were unable to recover it. 00:22:07.172 [2024-05-15 11:02:23.099185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.172 [2024-05-15 11:02:23.099211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.172 qpair failed and we were unable to recover it. 00:22:07.172 [2024-05-15 11:02:23.099417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.172 [2024-05-15 11:02:23.099443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.172 qpair failed and we were unable to recover it. 00:22:07.172 [2024-05-15 11:02:23.099733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.172 [2024-05-15 11:02:23.099761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.172 qpair failed and we were unable to recover it. 00:22:07.172 [2024-05-15 11:02:23.099984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.172 [2024-05-15 11:02:23.100015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.172 qpair failed and we were unable to recover it. 00:22:07.172 [2024-05-15 11:02:23.100279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.172 [2024-05-15 11:02:23.100305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.172 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.100513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.100539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.100799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.100830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.101086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.101113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 
00:22:07.173 [2024-05-15 11:02:23.101367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.101394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.101607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.101634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.101832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.101858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.102043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.102074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.102310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.102336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.102616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.102642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.102837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.102862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.103067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.103095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.103300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.103325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.103537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.103563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 
00:22:07.173 [2024-05-15 11:02:23.103752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.103778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.104044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.104086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.104309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.104337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.104550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.104593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.104918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.104983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.105170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.105197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.105419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.105462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.105840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.105892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.106123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.106149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.106393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.106437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 
00:22:07.173 [2024-05-15 11:02:23.106716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.106745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.107017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.107045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.107267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.107309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.107652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.107712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.107962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.107989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.108261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.108304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.108551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.108595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.108835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.108879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.109094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.109121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.109361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.109403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 
00:22:07.173 [2024-05-15 11:02:23.109691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.109734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.109988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.110015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.110228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.110254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.110474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.110519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.110788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.110832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.111048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.111075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.111345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.111391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.111637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.111680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.111907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.111938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.112154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.112180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 
00:22:07.173 [2024-05-15 11:02:23.112415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.112457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.112740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.112795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.113043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.113087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.113408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.113474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.113753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.113796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.114026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.114070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.114312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.114355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.114623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.114666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.114855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.114881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.115100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.115145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 
00:22:07.173 [2024-05-15 11:02:23.115383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.115426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.115668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.115711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.115954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.115982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.116223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.116266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.116510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.116538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.116781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.116825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.173 [2024-05-15 11:02:23.117045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.173 [2024-05-15 11:02:23.117090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.173 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.117309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.117352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.117593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.117636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.117820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.117845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 
00:22:07.174 [2024-05-15 11:02:23.118059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.118087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.118310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.118355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.118626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.118669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.118901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.118937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.119125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.119152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.119392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.119434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.119704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.119746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.119990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.120017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.120260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.120304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.120538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.120583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 
00:22:07.174 [2024-05-15 11:02:23.120829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.120855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.121071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.121115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.121354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.121397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.121631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.121675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.121856] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.121882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.122118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.122145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.122385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.122428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.122690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.122733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.122941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.122978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.123221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.123264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 
00:22:07.174 [2024-05-15 11:02:23.123535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.123578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.123834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.123878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.124088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.124115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.124366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.124414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.124657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.124699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.124886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.124912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.125132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.125158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.125395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.125436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.125674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.125718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.125936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.125963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 
00:22:07.174 [2024-05-15 11:02:23.126155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.126181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.126420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.126463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.126694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.126737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.126951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.126978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.127216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.127259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.127500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.127543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.127785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.127828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.128053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.128081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.128303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.128347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.128618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.128661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 
00:22:07.174 [2024-05-15 11:02:23.128844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.128870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.129056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.129084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.129347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.129390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.129654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.129698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.129887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.174 [2024-05-15 11:02:23.129915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.174 qpair failed and we were unable to recover it. 00:22:07.174 [2024-05-15 11:02:23.130107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.175 [2024-05-15 11:02:23.130134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.175 qpair failed and we were unable to recover it. 00:22:07.175 [2024-05-15 11:02:23.130376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.175 [2024-05-15 11:02:23.130419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.175 qpair failed and we were unable to recover it. 00:22:07.175 [2024-05-15 11:02:23.130629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.175 [2024-05-15 11:02:23.130672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.175 qpair failed and we were unable to recover it. 00:22:07.175 [2024-05-15 11:02:23.130885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.175 [2024-05-15 11:02:23.130911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.175 qpair failed and we were unable to recover it. 00:22:07.175 [2024-05-15 11:02:23.131152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.175 [2024-05-15 11:02:23.131195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.175 qpair failed and we were unable to recover it. 
00:22:07.175 [2024-05-15 11:02:23.131435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.175 [2024-05-15 11:02:23.131479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.175 qpair failed and we were unable to recover it.
00:22:07.175 [... the three-line connect()/qpair error above repeats back-to-back, identical except for timestamps, from 11:02:23.131435 through 11:02:23.188735 ...]
00:22:07.178 [2024-05-15 11:02:23.188921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.178 [2024-05-15 11:02:23.188955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.178 qpair failed and we were unable to recover it. 00:22:07.178 [2024-05-15 11:02:23.189163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.178 [2024-05-15 11:02:23.189190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.178 qpair failed and we were unable to recover it. 00:22:07.178 [2024-05-15 11:02:23.189400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.178 [2024-05-15 11:02:23.189445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.178 qpair failed and we were unable to recover it. 00:22:07.178 [2024-05-15 11:02:23.189658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.178 [2024-05-15 11:02:23.189707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.178 qpair failed and we were unable to recover it. 00:22:07.178 [2024-05-15 11:02:23.189948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.178 [2024-05-15 11:02:23.189975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.178 qpair failed and we were unable to recover it. 00:22:07.178 [2024-05-15 11:02:23.190193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.178 [2024-05-15 11:02:23.190237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.178 qpair failed and we were unable to recover it. 00:22:07.178 [2024-05-15 11:02:23.190447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.178 [2024-05-15 11:02:23.190491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.178 qpair failed and we were unable to recover it. 00:22:07.178 [2024-05-15 11:02:23.190693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.178 [2024-05-15 11:02:23.190737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.178 qpair failed and we were unable to recover it. 00:22:07.178 [2024-05-15 11:02:23.190927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.178 [2024-05-15 11:02:23.190961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.178 qpair failed and we were unable to recover it. 00:22:07.178 [2024-05-15 11:02:23.191201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.178 [2024-05-15 11:02:23.191246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.178 qpair failed and we were unable to recover it. 
00:22:07.178 [2024-05-15 11:02:23.191485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.178 [2024-05-15 11:02:23.191529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.178 qpair failed and we were unable to recover it. 00:22:07.178 [2024-05-15 11:02:23.191777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.178 [2024-05-15 11:02:23.191820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.178 qpair failed and we were unable to recover it. 00:22:07.178 [2024-05-15 11:02:23.192057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.178 [2024-05-15 11:02:23.192102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.178 qpair failed and we were unable to recover it. 00:22:07.178 [2024-05-15 11:02:23.192344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.178 [2024-05-15 11:02:23.192386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.178 qpair failed and we were unable to recover it. 00:22:07.178 [2024-05-15 11:02:23.192650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.178 [2024-05-15 11:02:23.192693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.178 qpair failed and we were unable to recover it. 00:22:07.178 [2024-05-15 11:02:23.192903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.178 [2024-05-15 11:02:23.192936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.178 qpair failed and we were unable to recover it. 00:22:07.178 [2024-05-15 11:02:23.193171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.178 [2024-05-15 11:02:23.193200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.178 qpair failed and we were unable to recover it. 00:22:07.178 [2024-05-15 11:02:23.193492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.178 [2024-05-15 11:02:23.193537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.178 qpair failed and we were unable to recover it. 00:22:07.178 [2024-05-15 11:02:23.193812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.178 [2024-05-15 11:02:23.193841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.178 qpair failed and we were unable to recover it. 00:22:07.178 [2024-05-15 11:02:23.194071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.178 [2024-05-15 11:02:23.194098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 
00:22:07.179 [2024-05-15 11:02:23.194364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.194407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.194647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.194690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.194896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.194921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.195140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.195166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.195395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.195424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.195648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.195692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.195898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.195924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.196112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.196138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.196368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.196410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.196693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.196737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 
00:22:07.179 [2024-05-15 11:02:23.196993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.197021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.197258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.197301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.197534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.197577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.197788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.197830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.198020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.198047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.198281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.198324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.198532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.198575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.198852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.198881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.199122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.199150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.199373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.199416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 
00:22:07.179 [2024-05-15 11:02:23.199651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.199694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.199906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.199938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.200172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.200216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.200426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.200474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.200680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.200723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.200941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.200969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.201157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.201183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.201425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.201468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.201680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.201709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.201941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.201968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 
00:22:07.179 [2024-05-15 11:02:23.202172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.202197] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.202406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.202449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.202689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.202730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.202944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.202971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.203157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.203184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.203449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.203493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.203743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.203787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.204073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.204100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.204366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.204408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.204642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.204685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 
00:22:07.179 [2024-05-15 11:02:23.204900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.204926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.205130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.205156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.205403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.205447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.205657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.205686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.205888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.205915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.206147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.206175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.206403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.206430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.206665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.206707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.206887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.206914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.207161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.207204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 
00:22:07.179 [2024-05-15 11:02:23.207445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.207488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.207721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.207764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.207997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.208024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.208290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.208334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.208550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.208580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.208847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.208874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.209103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.209148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.209366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.209409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.209649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.179 [2024-05-15 11:02:23.209691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.179 qpair failed and we were unable to recover it. 00:22:07.179 [2024-05-15 11:02:23.209924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.209958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 
00:22:07.180 [2024-05-15 11:02:23.210141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.210168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.210374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.210417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.210697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.210741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.210958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.210989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.211199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.211243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.211486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.211529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.211789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.211831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.212043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.212071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.212291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.212334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.212565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.212609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 
00:22:07.180 [2024-05-15 11:02:23.212816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.212843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.213037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.213065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.213331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.213375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.213648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.213691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.213901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.213927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.214211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.214256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.214530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.214573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.214822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.214865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.215104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.215130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.215352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.215396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 
00:22:07.180 [2024-05-15 11:02:23.215637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.215681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.215862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.215888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.216163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.216211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.216459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.216504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.216761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.216803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.217019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.217062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.217336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.217380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.217603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.217630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.217809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.217836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.218036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.218080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 
00:22:07.180 [2024-05-15 11:02:23.218360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.218409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.218651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.218694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.218936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.218963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.219224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.219272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.219511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.219555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.219735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.219762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.219983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.220011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.220243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.220286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.220505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.220547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.220754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.220780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 
00:22:07.180 [2024-05-15 11:02:23.221021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.221064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.221329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.221373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.221583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.221614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.221844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.221869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.222101] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.222145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.222345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.222388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.222619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.222662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.222843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.222870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.223106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.223150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 00:22:07.180 [2024-05-15 11:02:23.223365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.180 [2024-05-15 11:02:23.223408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.180 qpair failed and we were unable to recover it. 
00:22:07.181 [2024-05-15 11:02:23.223619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.223662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.223873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.223900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.224148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.224193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.224471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.224515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.224726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.224770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.224988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.225016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.225225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.225269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.225517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.225561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.225762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.225805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.226042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.226087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.226328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.226370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.226593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.226634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.226820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.226846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.227085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.227129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.227347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.227388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.227644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.227687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.227894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.227919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.228140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.228168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.228404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.228448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.228715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.228758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.228976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.229007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.229210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.229253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.229504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.229531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.229769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.229796] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.230032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.230076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.230311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.230354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.230592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.230635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.230849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.230875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.231089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.231133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.231335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.231378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.231584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.231627] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.231807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.231833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.232057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.232101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.232343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.232386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.232670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.232713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.232925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.232961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.233200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.233225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.233469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.233512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.233749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.233793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.234028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.234055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.234295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.234337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.234548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.234591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.234800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.234825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.235008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.235034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.235273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.235315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.235515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.235559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.235799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.235841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.236087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.236133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.236362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.236405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.236647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.236690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.236874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.236900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.237146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.237191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.237461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.237504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.237715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.237759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.237946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.237973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.238208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.238250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.238502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.238545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.181 [2024-05-15 11:02:23.238727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.181 [2024-05-15 11:02:23.238755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.181 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.238991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.239018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.239223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.239267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.239492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.239539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.239739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.239781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.240098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.240142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.240325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.240353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.240598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.240643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.240862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.240887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.241096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.241141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.241389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.241433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.241669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.241713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.241938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.241965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.242177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.242203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.242454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.242498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.242725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.242753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.242968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.242996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.243226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.243272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.243499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.243541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.243743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.243787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.244018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.244063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.244332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.244376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.244620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.244663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.244876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.244902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.245156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.245183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.245448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.245491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.245698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.245743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.245979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.246006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.246186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.246214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.246422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.246466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.246714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.246757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.247006] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.247033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.247244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.247287] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.247565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.247612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.247801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.247827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.248046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.248072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.248306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.248347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.248559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.248602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.248789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.248817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.249059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.249103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.249355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.249398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.249641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.249684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.249900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.249926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.250176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.250225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.250495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.250539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.250784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.250826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.251067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.251111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.251349] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.251393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.251620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.251662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.251883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.251910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.252154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.252180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.252389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.252432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.252709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.252757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.252972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.252999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.253219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.253262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.253472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.253515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.253751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.253795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.253986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.254014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.182 [2024-05-15 11:02:23.254232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.182 [2024-05-15 11:02:23.254275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.182 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.254543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.254586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.254823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.254849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.255076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.255103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.255320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.255363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.255637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.255683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.255941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.255968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.256209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.256235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.256452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.256496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.256764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.256807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.257044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.257071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.257292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.257334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.257583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.257626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.257804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.257829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.258046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.258089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.258335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.258380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.258597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.258624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.258861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.258887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.259109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.259153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.259366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.259416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.259682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.259726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.259914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.259950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.260175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.260219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.260466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.260510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.260749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.260792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.261027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.261075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.261278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.261321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.261553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.261598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.261785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.261810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.261995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.262023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.262252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.262296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.262525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.262568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.262783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.262810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.263045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.263089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.263334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.263378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.263615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.263659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.263894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.263919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.264124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.264152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.264371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.264399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.264648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.264691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.264907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.264943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.265182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.265208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.265446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.265489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.265706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.265747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.265927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.265962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.266237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.266286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.266561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.266605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.266869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.266911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.267192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.267237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.267507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.267552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.267828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.267854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.183 [2024-05-15 11:02:23.268067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.183 [2024-05-15 11:02:23.268097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.183 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.268388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.268434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.268726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.268769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.268972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.268998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.269250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.269294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.269694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.269737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.269948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.269975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.270185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.270211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.270429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.270472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.270669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.270712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.270955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.270982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.271199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.271225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.271491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.271534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.271802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.271845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.272087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.272119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.272405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.272448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.272688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.272731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.272937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.272964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.273178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.273204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.273408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.273451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.273685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.273728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.273962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.273989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.274167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.274193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.274457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.274499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.274743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.274786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.275004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.275031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.275296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.275339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.275688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.275731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.275980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.276008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.276281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.276324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.276567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.276611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.276860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.276886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.277163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.277206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.277478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.277521] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.277739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.277781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.278034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.278079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.278330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.278373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.278642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.278686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.278895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.278921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.279136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.279162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.279399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.279442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.279673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.279701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.279893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.279920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.280184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.184 [2024-05-15 11:02:23.280228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.184 qpair failed and we were unable to recover it.
00:22:07.184 [2024-05-15 11:02:23.280469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.184 [2024-05-15 11:02:23.280512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.184 qpair failed and we were unable to recover it. 00:22:07.184 [2024-05-15 11:02:23.280779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.184 [2024-05-15 11:02:23.280821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.184 qpair failed and we were unable to recover it. 00:22:07.184 [2024-05-15 11:02:23.281084] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.184 [2024-05-15 11:02:23.281129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.184 qpair failed and we were unable to recover it. 00:22:07.184 [2024-05-15 11:02:23.281402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.184 [2024-05-15 11:02:23.281446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.184 qpair failed and we were unable to recover it. 00:22:07.184 [2024-05-15 11:02:23.281708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.184 [2024-05-15 11:02:23.281751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.184 qpair failed and we were unable to recover it. 00:22:07.184 [2024-05-15 11:02:23.281941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.184 [2024-05-15 11:02:23.281967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.184 qpair failed and we were unable to recover it. 00:22:07.184 [2024-05-15 11:02:23.282216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.184 [2024-05-15 11:02:23.282242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.184 qpair failed and we were unable to recover it. 00:22:07.184 [2024-05-15 11:02:23.282517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.184 [2024-05-15 11:02:23.282560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.184 qpair failed and we were unable to recover it. 00:22:07.184 [2024-05-15 11:02:23.282746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.184 [2024-05-15 11:02:23.282773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.184 qpair failed and we were unable to recover it. 00:22:07.184 [2024-05-15 11:02:23.283001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.184 [2024-05-15 11:02:23.283046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.184 qpair failed and we were unable to recover it. 
00:22:07.184 [2024-05-15 11:02:23.283283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.184 [2024-05-15 11:02:23.283330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.184 qpair failed and we were unable to recover it. 00:22:07.184 [2024-05-15 11:02:23.283570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.184 [2024-05-15 11:02:23.283614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.184 qpair failed and we were unable to recover it. 00:22:07.184 [2024-05-15 11:02:23.283844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.184 [2024-05-15 11:02:23.283870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.184 qpair failed and we were unable to recover it. 00:22:07.184 [2024-05-15 11:02:23.284086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.184 [2024-05-15 11:02:23.284129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.184 qpair failed and we were unable to recover it. 00:22:07.184 [2024-05-15 11:02:23.284371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.284414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.284678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.284720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.284955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.284982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.285190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.285235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.285459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.285486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.285698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.285724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 
00:22:07.185 [2024-05-15 11:02:23.285907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.285940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.286185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.286233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.286470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.286513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.286737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.286765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.286977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.287004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.287184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.287211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.287428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.287471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.287748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.287794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.288002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.288048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.288263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.288307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 
00:22:07.185 [2024-05-15 11:02:23.288536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.288579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.288778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.288804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.289028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.289073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.289314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.289358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.289621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.289664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.289902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.289927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.290146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.290172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.290418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.290460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.290696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.290739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.290945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.290972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 
00:22:07.185 [2024-05-15 11:02:23.291203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.291246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.291486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.291528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.291769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.291813] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.292050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.292094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.292336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.292380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.292610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.292652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.292894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.292920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.293172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.293217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.293466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.293493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.293740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.293783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 
00:22:07.185 [2024-05-15 11:02:23.294029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.294078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.294315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.294358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.294630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.294658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.294916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.294950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.295161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.295187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.295393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.295436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.295675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.295718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.295943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.295970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.296232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.296276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.296463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.296490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 
00:22:07.185 [2024-05-15 11:02:23.296756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.296800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.297021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.297048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.297294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.297336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.297548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.297592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.297806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.297832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.298010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.298036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.298246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.298288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.298527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.298571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.298802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.298843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.299108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.299152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 
00:22:07.185 [2024-05-15 11:02:23.299367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.299409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.299641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.299685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.299867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.299893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.300097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.300123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.185 [2024-05-15 11:02:23.300340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.185 [2024-05-15 11:02:23.300384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.185 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.300635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.300678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.300862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.300887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.301091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.301118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.301323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.301367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.301588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.301631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 
00:22:07.186 [2024-05-15 11:02:23.301817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.301844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.302072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.302117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.302328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.302373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.302601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.302645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.302852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.302878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.303085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.303130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.303337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.303381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.303583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.303626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.303832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.303858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.304068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.304112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 
00:22:07.186 [2024-05-15 11:02:23.304352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.304401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.304611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.304655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.304842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.304869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.305086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.305131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.305337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.305381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.305624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.305666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.305879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.305904] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.306155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.306200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.306441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.306484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.306689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.306734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 
00:22:07.186 [2024-05-15 11:02:23.306915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.306950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.307155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.307183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.307420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.307463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.307674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.307718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.307944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.307972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.308160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.308187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.308434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.308474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.308717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.308761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.309034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.309061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.309271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.309314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 
00:22:07.186 [2024-05-15 11:02:23.309548] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.309591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.309862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.309907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.310157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.310183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.310388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.310432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.310665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.310708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.310919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.310954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.311141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.311168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.311389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.311432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.311701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.311745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.311938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.311966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 
00:22:07.186 [2024-05-15 11:02:23.312204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.312230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.312466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.312510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.312738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.312782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.186 [2024-05-15 11:02:23.313058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.186 [2024-05-15 11:02:23.313085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.186 qpair failed and we were unable to recover it. 00:22:07.187 [2024-05-15 11:02:23.313318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.187 [2024-05-15 11:02:23.313362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.187 qpair failed and we were unable to recover it. 00:22:07.187 [2024-05-15 11:02:23.313634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.187 [2024-05-15 11:02:23.313680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.187 qpair failed and we were unable to recover it. 00:22:07.187 [2024-05-15 11:02:23.313904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.187 [2024-05-15 11:02:23.313936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.187 qpair failed and we were unable to recover it. 00:22:07.187 [2024-05-15 11:02:23.314166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.187 [2024-05-15 11:02:23.314191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.187 qpair failed and we were unable to recover it. 00:22:07.187 [2024-05-15 11:02:23.314430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.187 [2024-05-15 11:02:23.314472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.187 qpair failed and we were unable to recover it. 00:22:07.187 [2024-05-15 11:02:23.314783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.187 [2024-05-15 11:02:23.314825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.187 qpair failed and we were unable to recover it. 
00:22:07.187 [2024-05-15 11:02:23.315071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.187 [2024-05-15 11:02:23.315102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.187 qpair failed and we were unable to recover it. 00:22:07.187 [2024-05-15 11:02:23.315324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.187 [2024-05-15 11:02:23.315366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.187 qpair failed and we were unable to recover it. 00:22:07.187 [2024-05-15 11:02:23.315612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.187 [2024-05-15 11:02:23.315655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.187 qpair failed and we were unable to recover it. 00:22:07.187 [2024-05-15 11:02:23.315879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.187 [2024-05-15 11:02:23.315905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.187 qpair failed and we were unable to recover it. 00:22:07.187 [2024-05-15 11:02:23.316125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.187 [2024-05-15 11:02:23.316151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.187 qpair failed and we were unable to recover it. 00:22:07.187 [2024-05-15 11:02:23.316443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.187 [2024-05-15 11:02:23.316487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.187 qpair failed and we were unable to recover it. 00:22:07.187 [2024-05-15 11:02:23.316783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.187 [2024-05-15 11:02:23.316825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.187 qpair failed and we were unable to recover it. 00:22:07.187 [2024-05-15 11:02:23.317066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.187 [2024-05-15 11:02:23.317094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.187 qpair failed and we were unable to recover it. 00:22:07.187 [2024-05-15 11:02:23.317336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.187 [2024-05-15 11:02:23.317379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.187 qpair failed and we were unable to recover it. 00:22:07.187 [2024-05-15 11:02:23.317621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.187 [2024-05-15 11:02:23.317664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.187 qpair failed and we were unable to recover it. 
00:22:07.190 [2024-05-15 11:02:23.373135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.190 [2024-05-15 11:02:23.373162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.190 qpair failed and we were unable to recover it. 00:22:07.190 [2024-05-15 11:02:23.373392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.190 [2024-05-15 11:02:23.373423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.190 qpair failed and we were unable to recover it. 00:22:07.190 [2024-05-15 11:02:23.373675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.190 [2024-05-15 11:02:23.373717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.190 qpair failed and we were unable to recover it. 00:22:07.190 [2024-05-15 11:02:23.373939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.190 [2024-05-15 11:02:23.373965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.190 qpair failed and we were unable to recover it. 00:22:07.190 [2024-05-15 11:02:23.374190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.190 [2024-05-15 11:02:23.374216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.190 qpair failed and we were unable to recover it. 00:22:07.190 [2024-05-15 11:02:23.374425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.190 [2024-05-15 11:02:23.374466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.190 qpair failed and we were unable to recover it. 00:22:07.190 [2024-05-15 11:02:23.374747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.190 [2024-05-15 11:02:23.374790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.190 qpair failed and we were unable to recover it. 00:22:07.190 [2024-05-15 11:02:23.375045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.190 [2024-05-15 11:02:23.375072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.190 qpair failed and we were unable to recover it. 00:22:07.190 [2024-05-15 11:02:23.375313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.190 [2024-05-15 11:02:23.375356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.190 qpair failed and we were unable to recover it. 00:22:07.190 [2024-05-15 11:02:23.375718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.190 [2024-05-15 11:02:23.375767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.190 qpair failed and we were unable to recover it. 
00:22:07.190 [2024-05-15 11:02:23.376041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.190 [2024-05-15 11:02:23.376068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.190 qpair failed and we were unable to recover it. 00:22:07.190 [2024-05-15 11:02:23.376274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.190 [2024-05-15 11:02:23.376301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.190 qpair failed and we were unable to recover it. 00:22:07.190 [2024-05-15 11:02:23.376601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.190 [2024-05-15 11:02:23.376628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.190 qpair failed and we were unable to recover it. 00:22:07.190 [2024-05-15 11:02:23.376870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.190 [2024-05-15 11:02:23.376895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.190 qpair failed and we were unable to recover it. 00:22:07.190 [2024-05-15 11:02:23.377151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.190 [2024-05-15 11:02:23.377178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.377421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.377463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.377733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.377776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.378059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.378090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.378342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.378385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.378664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.378707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 
00:22:07.191 [2024-05-15 11:02:23.378920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.378969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.379183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.379208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.379487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.379530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.379781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.379824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.380092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.380119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.380393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.380436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.380704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.380747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.380963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.380989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.381208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.381250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.381513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.381556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 
00:22:07.191 [2024-05-15 11:02:23.381776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.381818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.382028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.382054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.382333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.382376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.382626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.382669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.382862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.382886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.383118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.383145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.383377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.383420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.383685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.383730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.383943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.383970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.384205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.384232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 
00:22:07.191 [2024-05-15 11:02:23.384452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.384495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.384775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.384818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.385011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.385053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.385294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.385339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.385559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.385601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.385814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.385841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.386048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.386092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.386313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.386357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.386582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.386610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.386825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.386852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 
00:22:07.191 [2024-05-15 11:02:23.387090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.387135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.387369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.387412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.387643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.387686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.387900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.387926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.388143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.388186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.388428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.388473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.388741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.388785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.389007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.389058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.389265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.389308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.389554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.389596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 
00:22:07.191 [2024-05-15 11:02:23.389798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.389823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.390096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.390140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.390390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.390432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.390683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.390726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.390968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.390995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.391216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.391260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.391531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.391574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.391892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.391918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.392131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.392161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.392397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.392440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 
00:22:07.191 [2024-05-15 11:02:23.392678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.392721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.392988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.393015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.394138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.394169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.191 qpair failed and we were unable to recover it. 00:22:07.191 [2024-05-15 11:02:23.394431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.191 [2024-05-15 11:02:23.394475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.192 qpair failed and we were unable to recover it. 00:22:07.465 [2024-05-15 11:02:23.394727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.465 [2024-05-15 11:02:23.394771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.465 qpair failed and we were unable to recover it. 00:22:07.465 [2024-05-15 11:02:23.394994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.465 [2024-05-15 11:02:23.395021] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.465 qpair failed and we were unable to recover it. 00:22:07.465 [2024-05-15 11:02:23.395263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.465 [2024-05-15 11:02:23.395307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.465 qpair failed and we were unable to recover it. 00:22:07.465 [2024-05-15 11:02:23.395506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.465 [2024-05-15 11:02:23.395549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.465 qpair failed and we were unable to recover it. 00:22:07.465 [2024-05-15 11:02:23.395738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.465 [2024-05-15 11:02:23.395765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.465 qpair failed and we were unable to recover it. 00:22:07.465 [2024-05-15 11:02:23.395950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.465 [2024-05-15 11:02:23.395979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.465 qpair failed and we were unable to recover it. 
00:22:07.465 [2024-05-15 11:02:23.396311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.465 [2024-05-15 11:02:23.396358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.465 qpair failed and we were unable to recover it. 00:22:07.465 [2024-05-15 11:02:23.396659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.465 [2024-05-15 11:02:23.396703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.465 qpair failed and we were unable to recover it. 00:22:07.465 [2024-05-15 11:02:23.396912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.465 [2024-05-15 11:02:23.396955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.465 qpair failed and we were unable to recover it. 00:22:07.465 [2024-05-15 11:02:23.397161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.465 [2024-05-15 11:02:23.397205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.465 qpair failed and we were unable to recover it. 00:22:07.465 [2024-05-15 11:02:23.397448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.465 [2024-05-15 11:02:23.397492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.465 qpair failed and we were unable to recover it. 00:22:07.465 [2024-05-15 11:02:23.397765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.465 [2024-05-15 11:02:23.397809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.465 qpair failed and we were unable to recover it. 00:22:07.465 [2024-05-15 11:02:23.398059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.465 [2024-05-15 11:02:23.398087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.465 qpair failed and we were unable to recover it. 00:22:07.465 [2024-05-15 11:02:23.398271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.465 [2024-05-15 11:02:23.398298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.465 qpair failed and we were unable to recover it. 00:22:07.465 [2024-05-15 11:02:23.398541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.465 [2024-05-15 11:02:23.398585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.465 qpair failed and we were unable to recover it. 00:22:07.465 [2024-05-15 11:02:23.398803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.465 [2024-05-15 11:02:23.398829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.465 qpair failed and we were unable to recover it. 
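Note: errno = 111 on Linux is ECONNREFUSED, i.e. each TCP connection attempt to 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port) was actively refused, which typically means no nvmf target was listening on that address yet. A minimal standalone C sketch that reproduces the same errno outside of SPDK follows; assumptions: Linux, nothing listening on the chosen port, and 127.0.0.1 substituted for the log's 10.0.0.2 so the refusal does not depend on the test network.

/* repro.c - demonstrate errno 111 (ECONNREFUSED) from connect().
 * Build: cc -o repro repro.c    Run: ./repro
 * Expected output while nothing listens on the port:
 *   connect failed: errno=111 (Connection refused)
 */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
	int fd = socket(AF_INET, SOCK_STREAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	struct sockaddr_in sa = {
		.sin_family = AF_INET,
		.sin_port   = htons(4420),	/* NVMe/TCP port; assumed closed here */
	};
	inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);	/* stand-in for 10.0.0.2 */

	if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
		printf("connect failed: errno=%d (%s)\n", errno, strerror(errno));
	else
		printf("unexpectedly connected: something is listening on 4420\n");

	close(fd);
	return 0;
}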
00:22:07.465 [2024-05-15 11:02:23.399080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.465 [2024-05-15 11:02:23.399128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:07.465 qpair failed and we were unable to recover it.
[... 19 further identical attempts for tqpair=0x1e0d420 between 11:02:23.399376 and 11:02:23.404588, all failing with errno = 111 ...]
00:22:07.466 [2024-05-15 11:02:23.404870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.466 [2024-05-15 11:02:23.404899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:07.466 qpair failed and we were unable to recover it.
00:22:07.466 [2024-05-15 11:02:23.405180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.466 [2024-05-15 11:02:23.405240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420
00:22:07.466 qpair failed and we were unable to recover it.
[... 19 further identical attempts for tqpair=0x7f3540000b90 between 11:02:23.405508 and 11:02:23.410587, all failing with errno = 111 ...]
00:22:07.466 [2024-05-15 11:02:23.410874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.466 [2024-05-15 11:02:23.410903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420
00:22:07.466 qpair failed and we were unable to recover it.
00:22:07.466 [2024-05-15 11:02:23.411181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.466 [2024-05-15 11:02:23.411237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:07.466 qpair failed and we were unable to recover it.
[... 36 further identical attempts for tqpair=0x1e0d420 between 11:02:23.411488 and 11:02:23.420870, all failing with errno = 111 ...]
00:22:07.467 [2024-05-15 11:02:23.421116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.467 [2024-05-15 11:02:23.421143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:07.467 qpair failed and we were unable to recover it.
00:22:07.467 [2024-05-15 11:02:23.421403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.467 [2024-05-15 11:02:23.421429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.467 qpair failed and we were unable to recover it. 00:22:07.467 [2024-05-15 11:02:23.421666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.467 [2024-05-15 11:02:23.421712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.467 qpair failed and we were unable to recover it. 00:22:07.467 [2024-05-15 11:02:23.421952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.467 [2024-05-15 11:02:23.421979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.467 qpair failed and we were unable to recover it. 00:22:07.467 [2024-05-15 11:02:23.422189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.467 [2024-05-15 11:02:23.422216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.467 qpair failed and we were unable to recover it. 00:22:07.467 [2024-05-15 11:02:23.422453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.467 [2024-05-15 11:02:23.422482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.467 qpair failed and we were unable to recover it. 00:22:07.467 [2024-05-15 11:02:23.422716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.467 [2024-05-15 11:02:23.422744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.467 qpair failed and we were unable to recover it. 00:22:07.467 [2024-05-15 11:02:23.422963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.467 [2024-05-15 11:02:23.423006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.467 qpair failed and we were unable to recover it. 00:22:07.467 [2024-05-15 11:02:23.423193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.467 [2024-05-15 11:02:23.423234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.467 qpair failed and we were unable to recover it. 00:22:07.467 [2024-05-15 11:02:23.423466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.467 [2024-05-15 11:02:23.423498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.467 qpair failed and we were unable to recover it. 00:22:07.467 [2024-05-15 11:02:23.423770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.467 [2024-05-15 11:02:23.423798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.467 qpair failed and we were unable to recover it. 
00:22:07.467 [2024-05-15 11:02:23.424049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.424076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.424288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.424315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.424514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.424539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.424845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.424895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.425149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.425176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.425415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.425444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.425720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.425765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.426069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.426095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.426282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.426327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.426589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.426641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 
00:22:07.468 [2024-05-15 11:02:23.426851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.426877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.427069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.427097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.427283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.427309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.427488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.427513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.427698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.427723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.427954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.427980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.428159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.428185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.428408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.428439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.428713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.428739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.428918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.428956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 
00:22:07.468 [2024-05-15 11:02:23.429146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.429172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.429410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.429439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.429646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.429672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.429861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.429888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.430156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.430182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.430392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.430417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.430659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.430688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.430921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.430958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.431172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.431198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.431486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.431515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 
00:22:07.468 [2024-05-15 11:02:23.431754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.431783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.432051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.432078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.432268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.432293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.432557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.432585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.432782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.432808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.433024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.433050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.433257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.433285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.433511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.433536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.433781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.468 [2024-05-15 11:02:23.433810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.468 qpair failed and we were unable to recover it. 00:22:07.468 [2024-05-15 11:02:23.434079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.434109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 
00:22:07.469 [2024-05-15 11:02:23.434313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.434338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.434530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.434555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.434741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.434767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.434945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.434980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.435162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.435187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.435369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.435410] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.435637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.435662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.435869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.435912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.436180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.436209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.436423] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.436450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 
00:22:07.469 [2024-05-15 11:02:23.436716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.436744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.436973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.437003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.437216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.437242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.437484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.437513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.437706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.437734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.437945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.437971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.438182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.438208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.438389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.438416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.438604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.438629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.438865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.438893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 
00:22:07.469 [2024-05-15 11:02:23.439144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.439170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.439377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.439402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.439681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.439723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.439955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.439981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.440188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.440214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.440420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.440449] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.440655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.440688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.440946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.440973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.441199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.441227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 00:22:07.469 [2024-05-15 11:02:23.441469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.469 [2024-05-15 11:02:23.441494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.469 qpair failed and we were unable to recover it. 
00:22:07.470 [2024-05-15 11:02:23.441698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.441724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.441959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.442003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.442241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.442269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.442526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.442552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.442787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.442816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.443022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.443050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.443286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.443311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.443579] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.443607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.444045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.444074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.444327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.444353] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 
00:22:07.470 [2024-05-15 11:02:23.444566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.444594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.444796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.444824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.445058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.445084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.445329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.445355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.445587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.445615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.445852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.445877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.446148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.446177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.446426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.446454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.446679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.446705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.446896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.446921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 
00:22:07.470 [2024-05-15 11:02:23.447137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.447165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.447363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.447388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.447626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.447656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.447882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.447912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.448138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.448164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.448377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.448405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.448635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.448663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.448977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.449003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.449226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.449254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.449457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.449485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 
00:22:07.470 [2024-05-15 11:02:23.449713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.449738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.449983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.450012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.450277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.450305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.450516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.450542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.450774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.450803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.451048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.451077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.451281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.451306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.451523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.470 [2024-05-15 11:02:23.451565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.470 qpair failed and we were unable to recover it. 00:22:07.470 [2024-05-15 11:02:23.451761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.471 [2024-05-15 11:02:23.451789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.471 qpair failed and we were unable to recover it. 00:22:07.471 [2024-05-15 11:02:23.451992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.471 [2024-05-15 11:02:23.452018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.471 qpair failed and we were unable to recover it. 
00:22:07.471 [2024-05-15 11:02:23.452229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.471 [2024-05-15 11:02:23.452257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.471 qpair failed and we were unable to recover it. 00:22:07.471 [2024-05-15 11:02:23.452480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.471 [2024-05-15 11:02:23.452508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.471 qpair failed and we were unable to recover it. 00:22:07.471 [2024-05-15 11:02:23.452742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.471 [2024-05-15 11:02:23.452768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.471 qpair failed and we were unable to recover it. 00:22:07.471 [2024-05-15 11:02:23.452984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.471 [2024-05-15 11:02:23.453010] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.471 qpair failed and we were unable to recover it. 00:22:07.471 [2024-05-15 11:02:23.453244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.471 [2024-05-15 11:02:23.453272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.471 qpair failed and we were unable to recover it. 00:22:07.471 [2024-05-15 11:02:23.453532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.471 [2024-05-15 11:02:23.453558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.471 qpair failed and we were unable to recover it. 00:22:07.471 [2024-05-15 11:02:23.453775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.471 [2024-05-15 11:02:23.453800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.471 qpair failed and we were unable to recover it. 00:22:07.471 [2024-05-15 11:02:23.454022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.471 [2024-05-15 11:02:23.454051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.471 qpair failed and we were unable to recover it. 00:22:07.471 [2024-05-15 11:02:23.454288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.471 [2024-05-15 11:02:23.454314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.471 qpair failed and we were unable to recover it. 00:22:07.471 [2024-05-15 11:02:23.454520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.471 [2024-05-15 11:02:23.454545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.471 qpair failed and we were unable to recover it. 
00:22:07.471 [2024-05-15 11:02:23.454721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.471 [2024-05-15 11:02:23.454753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:07.471 qpair failed and we were unable to recover it.
00:22:07.476 [the same two-line error pair -- posix_sock_create connect() failed (errno = 111) followed by nvme_tcp_qpair_connect_sock failure for tqpair=0x1e0d420 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." -- repeats continuously from 11:02:23.454721 through 11:02:23.508844; duplicate occurrences elided]
00:22:07.476 [2024-05-15 11:02:23.509077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.476 [2024-05-15 11:02:23.509103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.476 qpair failed and we were unable to recover it. 00:22:07.476 [2024-05-15 11:02:23.509342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.476 [2024-05-15 11:02:23.509368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.476 qpair failed and we were unable to recover it. 00:22:07.476 [2024-05-15 11:02:23.509604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.476 [2024-05-15 11:02:23.509632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.476 qpair failed and we were unable to recover it. 00:22:07.476 [2024-05-15 11:02:23.509831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.476 [2024-05-15 11:02:23.509860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.476 qpair failed and we were unable to recover it. 00:22:07.476 [2024-05-15 11:02:23.510096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.476 [2024-05-15 11:02:23.510123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.476 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.510322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.510351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.510583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.510612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.510836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.510865] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.511075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.511105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.511308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.511336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 
00:22:07.477 [2024-05-15 11:02:23.511543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.511568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.511783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.511808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.512047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.512075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.512308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.512334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.512566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.512593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.512793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.512822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.513047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.513074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.513323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.513352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.513588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.513616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.513822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.513847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 
00:22:07.477 [2024-05-15 11:02:23.514083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.514113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.514314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.514342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.514550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.514575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.514806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.514836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.515104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.515134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.515339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.515365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.515560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.515589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.515791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.515820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.516023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.516049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.516258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.516286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 
00:22:07.477 [2024-05-15 11:02:23.516528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.516555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.516798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.516824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.517090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.517120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.517350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.517378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.517630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.517655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.517886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.517915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.518179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.518208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.518448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.518473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.518685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.518711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.518949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.518983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 
00:22:07.477 [2024-05-15 11:02:23.519188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.519214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.519487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.519515] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.519722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.519750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.519990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.477 [2024-05-15 11:02:23.520017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.477 qpair failed and we were unable to recover it. 00:22:07.477 [2024-05-15 11:02:23.520278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.520306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.520539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.520568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.520804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.520830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.521070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.521099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.521330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.521359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.521595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.521621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 
00:22:07.478 [2024-05-15 11:02:23.521865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.521893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.522155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.522182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.522360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.522385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.522633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.522658] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.522888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.522916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.523170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.523195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.523405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.523430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.523688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.523716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.523954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.523980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.524187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.524229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 
00:22:07.478 [2024-05-15 11:02:23.524410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.524436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.524676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.524702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.524959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.524986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.525233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.525261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.525472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.525497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.525709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.525752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.525991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.526017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.526205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.526230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.526495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.526524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.526754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.526780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 
00:22:07.478 [2024-05-15 11:02:23.527013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.527039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.527289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.527317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.527549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.527579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.527817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.527843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.528030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.528056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.528268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.528301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.478 qpair failed and we were unable to recover it. 00:22:07.478 [2024-05-15 11:02:23.528531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.478 [2024-05-15 11:02:23.528557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.528788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.528816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.529027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.529059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.529271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.529296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 
00:22:07.479 [2024-05-15 11:02:23.529503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.529528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.529733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.529758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.529946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.529972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.530210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.530237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.530498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.530526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.530741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.530766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.530980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.531008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.531271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.531298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.531498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.531522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.531796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.531822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 
00:22:07.479 [2024-05-15 11:02:23.532056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.532086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.532289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.532314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.532502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.532527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.532704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.532728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.533009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.533035] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.533238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.533267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.533535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.533563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.533773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.533800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.534038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.534068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.534306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.534332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 
00:22:07.479 [2024-05-15 11:02:23.534570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.534596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.534857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.534886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.535129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.535160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.535393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.535420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.535676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.535702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.535939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.535968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.536194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.536220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.536486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.536514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.536755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.536781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.536964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.536990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 
00:22:07.479 [2024-05-15 11:02:23.537183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.537209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.537420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.537462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.537708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.537734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.537978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.538004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.538215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.538240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.479 qpair failed and we were unable to recover it. 00:22:07.479 [2024-05-15 11:02:23.538419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.479 [2024-05-15 11:02:23.538444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.480 qpair failed and we were unable to recover it. 00:22:07.480 [2024-05-15 11:02:23.538638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.480 [2024-05-15 11:02:23.538663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.480 qpair failed and we were unable to recover it. 00:22:07.480 [2024-05-15 11:02:23.538898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.480 [2024-05-15 11:02:23.538923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.480 qpair failed and we were unable to recover it. 00:22:07.480 [2024-05-15 11:02:23.539134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.480 [2024-05-15 11:02:23.539160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.480 qpair failed and we were unable to recover it. 00:22:07.480 [2024-05-15 11:02:23.539393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.480 [2024-05-15 11:02:23.539422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.480 qpair failed and we were unable to recover it. 
00:22:07.480 [2024-05-15 11:02:23.539681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.480 [2024-05-15 11:02:23.539709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.480 qpair failed and we were unable to recover it. 00:22:07.480 [2024-05-15 11:02:23.540017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.480 [2024-05-15 11:02:23.540044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.480 qpair failed and we were unable to recover it. 00:22:07.480 [2024-05-15 11:02:23.540255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.480 [2024-05-15 11:02:23.540281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.480 qpair failed and we were unable to recover it. 00:22:07.480 [2024-05-15 11:02:23.540472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.480 [2024-05-15 11:02:23.540497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.480 qpair failed and we were unable to recover it. 00:22:07.480 [2024-05-15 11:02:23.540831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.480 [2024-05-15 11:02:23.540895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.480 qpair failed and we were unable to recover it. 00:22:07.480 [2024-05-15 11:02:23.541126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.480 [2024-05-15 11:02:23.541153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.480 qpair failed and we were unable to recover it. 00:22:07.480 [2024-05-15 11:02:23.541377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.480 [2024-05-15 11:02:23.541406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.480 qpair failed and we were unable to recover it. 00:22:07.480 [2024-05-15 11:02:23.541662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.480 [2024-05-15 11:02:23.541691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.480 qpair failed and we were unable to recover it. 00:22:07.480 [2024-05-15 11:02:23.541912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.480 [2024-05-15 11:02:23.541951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.480 qpair failed and we were unable to recover it. 00:22:07.480 [2024-05-15 11:02:23.542162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.480 [2024-05-15 11:02:23.542188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.480 qpair failed and we were unable to recover it. 
00:22:07.480 [2024-05-15 11:02:23.542440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.480 [2024-05-15 11:02:23.542466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:07.480 qpair failed and we were unable to recover it.
...
00:22:07.482 [2024-05-15 11:02:23.574926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.482 [2024-05-15 11:02:23.574970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:07.482 qpair failed and we were unable to recover it.
00:22:07.482 [2024-05-15 11:02:23.575177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.482 [2024-05-15 11:02:23.575218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.482 qpair failed and we were unable to recover it.
00:22:07.482 [2024-05-15 11:02:23.575920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.482 [2024-05-15 11:02:23.575955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.482 qpair failed and we were unable to recover it. 00:22:07.482 [2024-05-15 11:02:23.576147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.482 [2024-05-15 11:02:23.576173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.576379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.576405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.576680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.576706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.576917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.576953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.577144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.577171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.577455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.577481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.577731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.577777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.578067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.578095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.578271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.578296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 
00:22:07.483 [2024-05-15 11:02:23.578516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.578543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.578754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.578780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.578993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.579020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.579202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.579228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.579431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.579475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.579712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.579738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.579934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.579961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.580143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.580170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.580384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.580409] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.580659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.580702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 
00:22:07.483 [2024-05-15 11:02:23.580963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.580990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.581186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.581217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.581426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.581470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.581655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.581683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.581921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.581953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.582144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.582170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.582380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.582405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.582592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.582618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.582829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.582854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.583046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.583073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 
00:22:07.483 [2024-05-15 11:02:23.583335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.583377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.583619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.583663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.583854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.583881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.483 [2024-05-15 11:02:23.584109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.483 [2024-05-15 11:02:23.584137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.483 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.584347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.584373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.584594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.584620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.584836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.584863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.585050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.585077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.585280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.585324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.585529] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.585558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 
00:22:07.484 [2024-05-15 11:02:23.585772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.585798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.586054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.586081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.586294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.586324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.586608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.586653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.586889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.586916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.587106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.587132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.587350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.587377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.587591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.587635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.587882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.587908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.588140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.588167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 
00:22:07.484 [2024-05-15 11:02:23.588355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.588382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.588612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.588655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.588872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.588898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.589089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.589116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.589362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.589406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.589650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.589678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.589919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.589952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.590142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.590168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.590384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.590414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.590641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.590684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 
00:22:07.484 [2024-05-15 11:02:23.590905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.590940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.591133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.591164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.591405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.591434] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.591708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.591751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.591944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.591971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.592153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.592179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.592416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.592459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.592699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.592728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.592961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.592988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.593823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.593854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 
00:22:07.484 [2024-05-15 11:02:23.594078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.594106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.594326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.484 [2024-05-15 11:02:23.594369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.484 qpair failed and we were unable to recover it. 00:22:07.484 [2024-05-15 11:02:23.594651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.485 [2024-05-15 11:02:23.594695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.485 qpair failed and we were unable to recover it. 00:22:07.485 [2024-05-15 11:02:23.594909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.485 [2024-05-15 11:02:23.594941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.485 qpair failed and we were unable to recover it. 00:22:07.485 [2024-05-15 11:02:23.595133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.485 [2024-05-15 11:02:23.595159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.485 qpair failed and we were unable to recover it. 00:22:07.485 [2024-05-15 11:02:23.595430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.485 [2024-05-15 11:02:23.595478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.485 qpair failed and we were unable to recover it. 00:22:07.485 [2024-05-15 11:02:23.595709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.485 [2024-05-15 11:02:23.595738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.485 qpair failed and we were unable to recover it. 00:22:07.485 [2024-05-15 11:02:23.595977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.485 [2024-05-15 11:02:23.596003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.485 qpair failed and we were unable to recover it. 00:22:07.485 [2024-05-15 11:02:23.596184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.485 [2024-05-15 11:02:23.596209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.485 qpair failed and we were unable to recover it. 00:22:07.485 [2024-05-15 11:02:23.596422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.485 [2024-05-15 11:02:23.596448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.485 qpair failed and we were unable to recover it. 
00:22:07.485 [2024-05-15 11:02:23.596676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.485 [2024-05-15 11:02:23.596720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.485 qpair failed and we were unable to recover it. 00:22:07.485 [2024-05-15 11:02:23.596971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.485 [2024-05-15 11:02:23.596999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.485 qpair failed and we were unable to recover it. 00:22:07.485 [2024-05-15 11:02:23.597210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.485 [2024-05-15 11:02:23.597252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.485 qpair failed and we were unable to recover it. 00:22:07.485 [2024-05-15 11:02:23.597459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.485 [2024-05-15 11:02:23.597485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.485 qpair failed and we were unable to recover it. 00:22:07.485 [2024-05-15 11:02:23.597712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.485 [2024-05-15 11:02:23.597738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.485 qpair failed and we were unable to recover it. 00:22:07.485 [2024-05-15 11:02:23.597959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.485 [2024-05-15 11:02:23.597985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.485 qpair failed and we were unable to recover it. 00:22:07.485 [2024-05-15 11:02:23.598244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.485 [2024-05-15 11:02:23.598288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.485 qpair failed and we were unable to recover it. 00:22:07.485 [2024-05-15 11:02:23.598534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.485 [2024-05-15 11:02:23.598580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.485 qpair failed and we were unable to recover it. 00:22:07.485 [2024-05-15 11:02:23.598835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.485 [2024-05-15 11:02:23.598880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.485 qpair failed and we were unable to recover it. 00:22:07.485 [2024-05-15 11:02:23.599079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.485 [2024-05-15 11:02:23.599106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.485 qpair failed and we were unable to recover it. 
00:22:07.485 [2024-05-15 11:02:23.599360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.485 [2024-05-15 11:02:23.599404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.485 qpair failed and we were unable to recover it.
00:22:07.485 [2024-05-15 11:02:23.599653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.485 [2024-05-15 11:02:23.599697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.485 qpair failed and we were unable to recover it.
00:22:07.485 [2024-05-15 11:02:23.599903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.485 [2024-05-15 11:02:23.599936] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.485 qpair failed and we were unable to recover it.
00:22:07.485 [2024-05-15 11:02:23.600125] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.485 [2024-05-15 11:02:23.600151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.485 qpair failed and we were unable to recover it.
00:22:07.485 [2024-05-15 11:02:23.600369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.485 [2024-05-15 11:02:23.600413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.485 qpair failed and we were unable to recover it.
00:22:07.485 [2024-05-15 11:02:23.600652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.485 [2024-05-15 11:02:23.600695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.485 qpair failed and we were unable to recover it.
00:22:07.485 [2024-05-15 11:02:23.600934] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.485 [2024-05-15 11:02:23.600960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.485 qpair failed and we were unable to recover it.
00:22:07.485 [2024-05-15 11:02:23.601148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.485 [2024-05-15 11:02:23.601175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.485 qpair failed and we were unable to recover it.
00:22:07.485 [2024-05-15 11:02:23.601406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.485 [2024-05-15 11:02:23.601447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.485 qpair failed and we were unable to recover it.
00:22:07.485 [2024-05-15 11:02:23.601722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.485 [2024-05-15 11:02:23.601768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.485 qpair failed and we were unable to recover it.
00:22:07.485 [2024-05-15 11:02:23.601992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.485 [2024-05-15 11:02:23.602019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.485 qpair failed and we were unable to recover it.
00:22:07.485 [2024-05-15 11:02:23.602207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.485 [2024-05-15 11:02:23.602244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.485 qpair failed and we were unable to recover it.
00:22:07.485 [2024-05-15 11:02:23.602497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.485 [2024-05-15 11:02:23.602542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.485 qpair failed and we were unable to recover it.
00:22:07.485 [2024-05-15 11:02:23.602768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.485 [2024-05-15 11:02:23.602811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.485 qpair failed and we were unable to recover it.
00:22:07.485 [2024-05-15 11:02:23.602997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.485 [2024-05-15 11:02:23.603033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.485 qpair failed and we were unable to recover it.
00:22:07.485 [2024-05-15 11:02:23.603221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.485 [2024-05-15 11:02:23.603248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.485 qpair failed and we were unable to recover it.
00:22:07.485 [2024-05-15 11:02:23.603470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.485 [2024-05-15 11:02:23.603514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.485 qpair failed and we were unable to recover it.
00:22:07.485 [2024-05-15 11:02:23.603775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.485 [2024-05-15 11:02:23.603817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.485 qpair failed and we were unable to recover it.
00:22:07.485 [2024-05-15 11:02:23.604058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.485 [2024-05-15 11:02:23.604084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.485 qpair failed and we were unable to recover it.
00:22:07.485 [2024-05-15 11:02:23.604331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.485 [2024-05-15 11:02:23.604375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.485 qpair failed and we were unable to recover it.
00:22:07.485 [2024-05-15 11:02:23.605091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.485 [2024-05-15 11:02:23.605129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.485 qpair failed and we were unable to recover it.
00:22:07.485 [2024-05-15 11:02:23.605340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.485 [2024-05-15 11:02:23.605384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.485 qpair failed and we were unable to recover it.
00:22:07.485 [2024-05-15 11:02:23.605615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.485 [2024-05-15 11:02:23.605663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.605874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.605900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.606103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.606131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.606361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.606405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.606635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.606679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.606903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.606938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.607134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.607160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.607355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.607381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.607608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.607653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.607863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.607889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.608105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.608132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.608319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.608346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.608561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.608587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.608796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.608821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.609039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.609066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.609313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.609339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.609615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.609659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.609847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.609873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.610075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.610101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.610288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.610314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.610524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.610550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.610789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.610832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.611046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.611074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.611258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.611284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.611489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.611533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.611830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.611857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.612064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.612091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.612280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.612306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.612510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.612536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.612809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.612839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.613036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.613064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.613277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.613303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.613518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.613547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.613771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.613797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.614011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.614038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.614253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.614281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.614506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.614550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.614737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.614773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.614994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.615020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.615201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.615227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.615439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.615465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.615670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.486 [2024-05-15 11:02:23.615696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.486 qpair failed and we were unable to recover it.
00:22:07.486 [2024-05-15 11:02:23.615903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.615939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.616136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.616162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.616369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.616396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.616611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.616637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.616848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.616874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.617089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.617117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.617308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.617336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.617606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.617653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.617864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.617890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.618097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.618124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.618328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.618371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.618638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.618682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.618962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.618989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.619198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.619241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.619505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.619550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.619800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.619844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.620103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.620131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.620323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.620348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.620567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.620611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.620843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.620870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.621094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.621120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.621309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.621336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.621549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.621590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.621816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.621844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.622042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.622069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.622276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.622327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.622561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.622592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.622825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.622859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.623077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.623106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.623326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.623352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.623616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.623642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.623834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.623859] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.624097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.624123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.624344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.624388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.624620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.624663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.624900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.624926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.625117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.625144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.625393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.487 [2024-05-15 11:02:23.625436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.487 qpair failed and we were unable to recover it.
00:22:07.487 [2024-05-15 11:02:23.625660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.625703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.625909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.625939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.626151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.626177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.626503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.626547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.626800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.626842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.627062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.627089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.627302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.627345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.627606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.627648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.627870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.627897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.628144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.628188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.628454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.628496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.628758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.628803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.629041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.629084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.629304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.629348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.629564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.629606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.629845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.629871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.630090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.630135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.630382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.630424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.630669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.630712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.630922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.630956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.631163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.631189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.631405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.631430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.631642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.631669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.631873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.631899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.632113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.632157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.632397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.632440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.632709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.632755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.632949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.632976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.633191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.633217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.633447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.633494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.633730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.633775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.634016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.634060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.634301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.634344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.634611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.634654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.634850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.634875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.635161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.635209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.635448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.635491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.635705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.635747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.635958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.635986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.636210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.636253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.636497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.636523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.636705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.636731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.636941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.636969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.637192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.637236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.637447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.637490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.488 qpair failed and we were unable to recover it.
00:22:07.488 [2024-05-15 11:02:23.637732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.488 [2024-05-15 11:02:23.637775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.489 qpair failed and we were unable to recover it.
00:22:07.489 [2024-05-15 11:02:23.637965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.489 [2024-05-15 11:02:23.637992] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.489 qpair failed and we were unable to recover it.
00:22:07.489 [2024-05-15 11:02:23.638240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.489 [2024-05-15 11:02:23.638266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.489 qpair failed and we were unable to recover it.
00:22:07.489 [2024-05-15 11:02:23.638504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.489 [2024-05-15 11:02:23.638546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.489 qpair failed and we were unable to recover it.
00:22:07.489 [2024-05-15 11:02:23.638787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.489 [2024-05-15 11:02:23.638829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.489 qpair failed and we were unable to recover it.
00:22:07.489 [2024-05-15 11:02:23.639025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.489 [2024-05-15 11:02:23.639050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.489 qpair failed and we were unable to recover it.
00:22:07.489 [2024-05-15 11:02:23.639254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.489 [2024-05-15 11:02:23.639297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.489 qpair failed and we were unable to recover it.
00:22:07.489 [2024-05-15 11:02:23.639665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.489 [2024-05-15 11:02:23.639711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.489 qpair failed and we were unable to recover it.
00:22:07.489 [2024-05-15 11:02:23.640046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.489 [2024-05-15 11:02:23.640072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.489 qpair failed and we were unable to recover it.
00:22:07.489 [2024-05-15 11:02:23.640320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.489 [2024-05-15 11:02:23.640363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.489 qpair failed and we were unable to recover it.
00:22:07.489 [2024-05-15 11:02:23.640595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.489 [2024-05-15 11:02:23.640638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.489 qpair failed and we were unable to recover it.
00:22:07.489 [2024-05-15 11:02:23.640842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.489 [2024-05-15 11:02:23.640868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.489 qpair failed and we were unable to recover it.
00:22:07.489 [2024-05-15 11:02:23.641094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.489 [2024-05-15 11:02:23.641121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.489 qpair failed and we were unable to recover it.
00:22:07.489 [2024-05-15 11:02:23.641330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.489 [2024-05-15 11:02:23.641374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.489 qpair failed and we were unable to recover it.
00:22:07.489 [2024-05-15 11:02:23.641656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.489 [2024-05-15 11:02:23.641702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.489 qpair failed and we were unable to recover it.
00:22:07.489 [2024-05-15 11:02:23.641926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.489 [2024-05-15 11:02:23.641962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.489 qpair failed and we were unable to recover it.
00:22:07.489 [2024-05-15 11:02:23.642144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.489 [2024-05-15 11:02:23.642170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.489 qpair failed and we were unable to recover it.
00:22:07.489 [2024-05-15 11:02:23.642363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.489 [2024-05-15 11:02:23.642388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.489 qpair failed and we were unable to recover it.
00:22:07.489 [2024-05-15 11:02:23.642662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.489 [2024-05-15 11:02:23.642705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.489 qpair failed and we were unable to recover it.
00:22:07.489 [2024-05-15 11:02:23.642952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.489 [2024-05-15 11:02:23.642987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.489 qpair failed and we were unable to recover it.
00:22:07.489 [2024-05-15 11:02:23.643183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.489 [2024-05-15 11:02:23.643209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.489 qpair failed and we were unable to recover it.
00:22:07.489 [2024-05-15 11:02:23.643461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.489 [2024-05-15 11:02:23.643503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.489 qpair failed and we were unable to recover it.
00:22:07.489 [2024-05-15 11:02:23.643858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.489 [2024-05-15 11:02:23.643905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:07.489 qpair failed and we were unable to recover it.
00:22:07.489 [2024-05-15 11:02:23.644124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.489 [2024-05-15 11:02:23.644150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.489 qpair failed and we were unable to recover it. 00:22:07.489 [2024-05-15 11:02:23.644373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.489 [2024-05-15 11:02:23.644420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.489 qpair failed and we were unable to recover it. 00:22:07.489 [2024-05-15 11:02:23.644680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.489 [2024-05-15 11:02:23.644725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.489 qpair failed and we were unable to recover it. 00:22:07.489 [2024-05-15 11:02:23.644981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.489 [2024-05-15 11:02:23.645007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.489 qpair failed and we were unable to recover it. 00:22:07.489 [2024-05-15 11:02:23.645272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.489 [2024-05-15 11:02:23.645318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.489 qpair failed and we were unable to recover it. 00:22:07.489 [2024-05-15 11:02:23.645609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.489 [2024-05-15 11:02:23.645652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.489 qpair failed and we were unable to recover it. 00:22:07.489 [2024-05-15 11:02:23.645875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.489 [2024-05-15 11:02:23.645901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.489 qpair failed and we were unable to recover it. 00:22:07.489 [2024-05-15 11:02:23.646162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.489 [2024-05-15 11:02:23.646206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.489 qpair failed and we were unable to recover it. 00:22:07.489 [2024-05-15 11:02:23.646504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.489 [2024-05-15 11:02:23.646530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.489 qpair failed and we were unable to recover it. 00:22:07.489 [2024-05-15 11:02:23.646846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.489 [2024-05-15 11:02:23.646888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.489 qpair failed and we were unable to recover it. 
00:22:07.489 [2024-05-15 11:02:23.647091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.489 [2024-05-15 11:02:23.647118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.489 qpair failed and we were unable to recover it. 00:22:07.489 [2024-05-15 11:02:23.647361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.489 [2024-05-15 11:02:23.647404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.489 qpair failed and we were unable to recover it. 00:22:07.489 [2024-05-15 11:02:23.647646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.489 [2024-05-15 11:02:23.647689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.489 qpair failed and we were unable to recover it. 00:22:07.489 [2024-05-15 11:02:23.647916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.489 [2024-05-15 11:02:23.647947] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.489 qpair failed and we were unable to recover it. 00:22:07.489 [2024-05-15 11:02:23.648137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.489 [2024-05-15 11:02:23.648163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.489 qpair failed and we were unable to recover it. 00:22:07.489 [2024-05-15 11:02:23.648460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.489 [2024-05-15 11:02:23.648504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.489 qpair failed and we were unable to recover it. 00:22:07.489 [2024-05-15 11:02:23.648790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.489 [2024-05-15 11:02:23.648816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.489 qpair failed and we were unable to recover it. 00:22:07.489 [2024-05-15 11:02:23.649076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.489 [2024-05-15 11:02:23.649103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.489 qpair failed and we were unable to recover it. 00:22:07.489 [2024-05-15 11:02:23.649345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.489 [2024-05-15 11:02:23.649372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.489 qpair failed and we were unable to recover it. 00:22:07.489 [2024-05-15 11:02:23.649627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.489 [2024-05-15 11:02:23.649671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.489 qpair failed and we were unable to recover it. 
00:22:07.489 [2024-05-15 11:02:23.649885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.489 [2024-05-15 11:02:23.649911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.489 qpair failed and we were unable to recover it. 00:22:07.489 [2024-05-15 11:02:23.650158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.489 [2024-05-15 11:02:23.650185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.489 qpair failed and we were unable to recover it. 00:22:07.489 [2024-05-15 11:02:23.650461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.489 [2024-05-15 11:02:23.650505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.489 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.650744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.650787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.651014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.651040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.651275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.651318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.651557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.651599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.651805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.651830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.652064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.652091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.652323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.652366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 
00:22:07.490 [2024-05-15 11:02:23.652589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.652632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.652847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.652874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.653104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.653149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.653432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.653475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.653783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.653811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.654112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.654156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.654379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.654421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.654679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.654723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.654939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.654965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.655194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.655238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 
00:22:07.490 [2024-05-15 11:02:23.655485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.655527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.655767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.655815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.656085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.656112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.656358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.656401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.656678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.656721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.656959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.656985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.657261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.657306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.657519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.657546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.657728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.657755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.657962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.657989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 
00:22:07.490 [2024-05-15 11:02:23.658239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.658267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.658594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.658642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.658865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.658891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.663196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.663242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.663534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.663579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.663810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.663854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.664073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.664101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.664311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.664339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.664576] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.664602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.664805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.664848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 
00:22:07.490 [2024-05-15 11:02:23.665037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.665063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.665350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.665376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.665667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.665711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.665940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.665966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.666147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.666173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.666395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.666421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.490 [2024-05-15 11:02:23.666659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.490 [2024-05-15 11:02:23.666702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.490 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.666936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.666963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.667150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.667178] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.667422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.667466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 
00:22:07.491 [2024-05-15 11:02:23.667735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.667778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.668064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.668091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.668306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.668349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.668604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.668630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.668867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.668893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.669118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.669145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.669336] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.669363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.669606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.669649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.669852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.669877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.670115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.670159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 
00:22:07.491 [2024-05-15 11:02:23.670397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.670440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.670739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.670788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.671031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.671074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.671277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.671321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.671585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.671628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.671833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.671860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.672078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.672122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.672360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.672386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.672702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.672732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.672980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.673005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 
00:22:07.491 [2024-05-15 11:02:23.673272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.673319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.673602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.673645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.673905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.673935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.674178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.674207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.674531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.674574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.674784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.674828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.675074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.675101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.675501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.675568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.675903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.675954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.676250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.676276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 
00:22:07.491 [2024-05-15 11:02:23.676589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.676633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.676896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.676925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.677221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.677249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.677461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.677486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.677727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.491 [2024-05-15 11:02:23.677771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.491 qpair failed and we were unable to recover it. 00:22:07.491 [2024-05-15 11:02:23.678059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.492 [2024-05-15 11:02:23.678086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.492 qpair failed and we were unable to recover it. 00:22:07.492 [2024-05-15 11:02:23.678322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.492 [2024-05-15 11:02:23.678365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.492 qpair failed and we were unable to recover it. 00:22:07.492 [2024-05-15 11:02:23.678577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.492 [2024-05-15 11:02:23.678619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:07.492 qpair failed and we were unable to recover it. 00:22:07.492 [2024-05-15 11:02:23.678844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.492 [2024-05-15 11:02:23.678885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.492 qpair failed and we were unable to recover it. 00:22:07.492 [2024-05-15 11:02:23.679111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.492 [2024-05-15 11:02:23.679140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.492 qpair failed and we were unable to recover it. 
00:22:07.492 [2024-05-15 11:02:23.679381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.492 [2024-05-15 11:02:23.679407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.492 qpair failed and we were unable to recover it. 00:22:07.492 [2024-05-15 11:02:23.679637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.492 [2024-05-15 11:02:23.679665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.492 qpair failed and we were unable to recover it. 00:22:07.492 [2024-05-15 11:02:23.679874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.492 [2024-05-15 11:02:23.679900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.492 qpair failed and we were unable to recover it. 00:22:07.492 [2024-05-15 11:02:23.680093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.492 [2024-05-15 11:02:23.680120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.492 qpair failed and we were unable to recover it. 00:22:07.492 [2024-05-15 11:02:23.680368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.492 [2024-05-15 11:02:23.680398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.492 qpair failed and we were unable to recover it. 00:22:07.492 [2024-05-15 11:02:23.680601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.492 [2024-05-15 11:02:23.680630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.492 qpair failed and we were unable to recover it. 00:22:07.492 [2024-05-15 11:02:23.680837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.492 [2024-05-15 11:02:23.680866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.492 qpair failed and we were unable to recover it. 00:22:07.492 [2024-05-15 11:02:23.681075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.492 [2024-05-15 11:02:23.681102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.492 qpair failed and we were unable to recover it. 00:22:07.492 [2024-05-15 11:02:23.681312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.492 [2024-05-15 11:02:23.681340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.492 qpair failed and we were unable to recover it. 00:22:07.492 [2024-05-15 11:02:23.681572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.492 [2024-05-15 11:02:23.681602] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.492 qpair failed and we were unable to recover it. 
00:22:07.492 [2024-05-15 11:02:23.681868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.492 [2024-05-15 11:02:23.681897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.492 qpair failed and we were unable to recover it. 00:22:07.492 [2024-05-15 11:02:23.682149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.492 [2024-05-15 11:02:23.682180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.492 qpair failed and we were unable to recover it. 00:22:07.492 [2024-05-15 11:02:23.682400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.492 [2024-05-15 11:02:23.682428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.492 qpair failed and we were unable to recover it. 00:22:07.769 [2024-05-15 11:02:23.682659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.769 [2024-05-15 11:02:23.682689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.769 qpair failed and we were unable to recover it. 00:22:07.769 [2024-05-15 11:02:23.682926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.769 [2024-05-15 11:02:23.682979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.769 qpair failed and we were unable to recover it. 00:22:07.769 [2024-05-15 11:02:23.683192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.769 [2024-05-15 11:02:23.683218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.769 qpair failed and we were unable to recover it. 00:22:07.769 [2024-05-15 11:02:23.683427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.769 [2024-05-15 11:02:23.683455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.769 qpair failed and we were unable to recover it. 00:22:07.769 [2024-05-15 11:02:23.683669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.769 [2024-05-15 11:02:23.683697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.769 qpair failed and we were unable to recover it. 00:22:07.769 [2024-05-15 11:02:23.683899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.769 [2024-05-15 11:02:23.683925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.769 qpair failed and we were unable to recover it. 00:22:07.769 [2024-05-15 11:02:23.684141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.769 [2024-05-15 11:02:23.684167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.769 qpair failed and we were unable to recover it. 
00:22:07.769 [2024-05-15 11:02:23.684383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.769 [2024-05-15 11:02:23.684412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.769 qpair failed and we were unable to recover it. 00:22:07.769 [2024-05-15 11:02:23.684657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.769 [2024-05-15 11:02:23.684687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.769 qpair failed and we were unable to recover it. 00:22:07.769 [2024-05-15 11:02:23.684915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.769 [2024-05-15 11:02:23.684951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.769 qpair failed and we were unable to recover it. 00:22:07.769 [2024-05-15 11:02:23.685346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.769 [2024-05-15 11:02:23.685376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.769 qpair failed and we were unable to recover it. 00:22:07.769 [2024-05-15 11:02:23.685614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.769 [2024-05-15 11:02:23.685644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.769 qpair failed and we were unable to recover it. 00:22:07.769 [2024-05-15 11:02:23.685894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.769 [2024-05-15 11:02:23.685923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.769 qpair failed and we were unable to recover it. 00:22:07.769 [2024-05-15 11:02:23.686163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.769 [2024-05-15 11:02:23.686188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.769 qpair failed and we were unable to recover it. 00:22:07.769 [2024-05-15 11:02:23.686459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.769 [2024-05-15 11:02:23.686485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.769 qpair failed and we were unable to recover it. 00:22:07.769 [2024-05-15 11:02:23.686690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.769 [2024-05-15 11:02:23.686719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.769 qpair failed and we were unable to recover it. 00:22:07.769 [2024-05-15 11:02:23.686924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.769 [2024-05-15 11:02:23.686978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.769 qpair failed and we were unable to recover it. 
00:22:07.769 [2024-05-15 11:02:23.687161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.769 [2024-05-15 11:02:23.687186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.769 qpair failed and we were unable to recover it. 00:22:07.769 [2024-05-15 11:02:23.687460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.769 [2024-05-15 11:02:23.687486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.769 qpair failed and we were unable to recover it. 00:22:07.769 [2024-05-15 11:02:23.687735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.769 [2024-05-15 11:02:23.687764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.769 qpair failed and we were unable to recover it. 00:22:07.769 [2024-05-15 11:02:23.688010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.769 [2024-05-15 11:02:23.688036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.769 qpair failed and we were unable to recover it. 00:22:07.769 [2024-05-15 11:02:23.688275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.769 [2024-05-15 11:02:23.688300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.769 qpair failed and we were unable to recover it. 00:22:07.769 [2024-05-15 11:02:23.688598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.769 [2024-05-15 11:02:23.688626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.769 qpair failed and we were unable to recover it. 00:22:07.769 [2024-05-15 11:02:23.688886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.769 [2024-05-15 11:02:23.688914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.769 qpair failed and we were unable to recover it. 00:22:07.769 [2024-05-15 11:02:23.689134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.769 [2024-05-15 11:02:23.689160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:07.770 qpair failed and we were unable to recover it. 00:22:07.770 [2024-05-15 11:02:23.689439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.770 [2024-05-15 11:02:23.689483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.770 qpair failed and we were unable to recover it. 00:22:07.770 [2024-05-15 11:02:23.689720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.770 [2024-05-15 11:02:23.689751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.770 qpair failed and we were unable to recover it. 
00:22:07.770 [2024-05-15 11:02:23.689983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.770 [2024-05-15 11:02:23.690012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:07.770 qpair failed and we were unable to recover it.
00:22:07.771 [... the same three-line error sequence repeats for every reconnect attempt, timestamps advancing from 11:02:23.689 through 11:02:23.744, always against tqpair=0x1e0d420, addr=10.0.0.2, port=4420 ...]
00:22:07.775 [2024-05-15 11:02:23.744123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.775 [2024-05-15 11:02:23.744150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:07.775 qpair failed and we were unable to recover it.
00:22:07.775 [2024-05-15 11:02:23.744361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.775 [2024-05-15 11:02:23.744387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.775 qpair failed and we were unable to recover it. 00:22:07.775 [2024-05-15 11:02:23.744601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.775 [2024-05-15 11:02:23.744626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.775 qpair failed and we were unable to recover it. 00:22:07.775 [2024-05-15 11:02:23.744860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.775 [2024-05-15 11:02:23.744888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.775 qpair failed and we were unable to recover it. 00:22:07.775 [2024-05-15 11:02:23.745148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.775 [2024-05-15 11:02:23.745174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.775 qpair failed and we were unable to recover it. 00:22:07.775 [2024-05-15 11:02:23.745419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.775 [2024-05-15 11:02:23.745447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.775 qpair failed and we were unable to recover it. 00:22:07.775 [2024-05-15 11:02:23.745675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.775 [2024-05-15 11:02:23.745704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.775 qpair failed and we were unable to recover it. 00:22:07.775 [2024-05-15 11:02:23.745935] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.775 [2024-05-15 11:02:23.745961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.775 qpair failed and we were unable to recover it. 00:22:07.775 [2024-05-15 11:02:23.746202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.775 [2024-05-15 11:02:23.746230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.775 qpair failed and we were unable to recover it. 00:22:07.775 [2024-05-15 11:02:23.746439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.775 [2024-05-15 11:02:23.746466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.775 qpair failed and we were unable to recover it. 00:22:07.775 [2024-05-15 11:02:23.746668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.775 [2024-05-15 11:02:23.746693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.775 qpair failed and we were unable to recover it. 
00:22:07.775 [2024-05-15 11:02:23.746959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.775 [2024-05-15 11:02:23.746999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.775 qpair failed and we were unable to recover it. 00:22:07.775 [2024-05-15 11:02:23.747233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.775 [2024-05-15 11:02:23.747261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.775 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.747490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.747516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.747764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.747789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.748000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.748029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.748255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.748280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.748520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.748548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.748754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.748782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.749048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.749074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.749264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.749290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 
00:22:07.776 [2024-05-15 11:02:23.749497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.749527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.749764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.749790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.750000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.750026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.750233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.750258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.750467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.750492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.750673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.750698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.750912] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.750950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.751160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.751185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.751421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.751448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.751685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.751713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 
00:22:07.776 [2024-05-15 11:02:23.751951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.751977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.752217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.752244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.752504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.752536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.752738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.752763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.753033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.753062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.753292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.753320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.753541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.753566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.753806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.753834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.754073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.754099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.754306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.754331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 
00:22:07.776 [2024-05-15 11:02:23.754573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.754599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.754830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.754873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.755081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.755108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.755341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.755368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.755603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.755630] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.755836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.755863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.756090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.756116] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.756374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.756399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.756603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.756628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.776 [2024-05-15 11:02:23.756849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.756876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 
00:22:07.776 [2024-05-15 11:02:23.757082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.776 [2024-05-15 11:02:23.757112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.776 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.757324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.757350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.757585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.757615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.757839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.757867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.758108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.758135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.758365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.758393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.758640] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.758668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.758927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.758965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.759209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.759236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.759427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.759461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 
00:22:07.777 [2024-05-15 11:02:23.759698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.759724] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.759942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.759983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.760194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.760220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.760392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.760417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.760596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.760622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.760831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.760863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.761120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.761146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.761374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.761402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.761638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.761663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.761838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.761862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 
00:22:07.777 [2024-05-15 11:02:23.762093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.762121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.762355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.762380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.762617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.762642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.762919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.762961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.763202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.763230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.763444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.763470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.763732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.763774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.764080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.764106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.764332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.764358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.764591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.764619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 
00:22:07.777 [2024-05-15 11:02:23.764828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.764853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.765105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.765130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.765373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.765401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.765632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.765657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.765833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.765858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.766091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.766121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.766352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.766380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.766611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.766637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.766901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.766943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 00:22:07.777 [2024-05-15 11:02:23.767146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.777 [2024-05-15 11:02:23.767176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.777 qpair failed and we were unable to recover it. 
00:22:07.778 [2024-05-15 11:02:23.767382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.767408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.767645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.767674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.767871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.767899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.768121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.768147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.768356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.768397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.768651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.768679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.768876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.768901] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.769118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.769145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.769367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.769396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.769637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.769662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 
00:22:07.778 [2024-05-15 11:02:23.769901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.769939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.770221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.770250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.770478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.770504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.770712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.770742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.771010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.771037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.771248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.771273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.771508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.771537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.771763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.771791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.772050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.772076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.772322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.772348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 
00:22:07.778 [2024-05-15 11:02:23.772526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.772553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.772767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.772792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.773064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.773090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.773297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.773326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.773526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.773552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.773753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.773783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.774083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.774109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.774296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.774322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.774589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.774617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.774845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.774887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 
00:22:07.778 [2024-05-15 11:02:23.775148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.778 [2024-05-15 11:02:23.775175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.778 qpair failed and we were unable to recover it. 00:22:07.778 [2024-05-15 11:02:23.775410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.779 [2024-05-15 11:02:23.775438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.779 qpair failed and we were unable to recover it. 00:22:07.779 [2024-05-15 11:02:23.775680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.779 [2024-05-15 11:02:23.775708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.779 qpair failed and we were unable to recover it. 00:22:07.779 [2024-05-15 11:02:23.775974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.779 [2024-05-15 11:02:23.776004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.779 qpair failed and we were unable to recover it. 00:22:07.779 [2024-05-15 11:02:23.776218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.779 [2024-05-15 11:02:23.776243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.779 qpair failed and we were unable to recover it. 00:22:07.779 [2024-05-15 11:02:23.776475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.779 [2024-05-15 11:02:23.776504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.779 qpair failed and we were unable to recover it. 00:22:07.779 [2024-05-15 11:02:23.776763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.779 [2024-05-15 11:02:23.776788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.779 qpair failed and we were unable to recover it. 00:22:07.779 [2024-05-15 11:02:23.777053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.779 [2024-05-15 11:02:23.777086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.779 qpair failed and we were unable to recover it. 00:22:07.779 [2024-05-15 11:02:23.777284] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.779 [2024-05-15 11:02:23.777314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.779 qpair failed and we were unable to recover it. 00:22:07.779 [2024-05-15 11:02:23.777550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.779 [2024-05-15 11:02:23.777577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.779 qpair failed and we were unable to recover it. 
00:22:07.779 [2024-05-15 11:02:23.777845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.779 [2024-05-15 11:02:23.777874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:07.779 qpair failed and we were unable to recover it.
00:22:07.779 [... the same three-message sequence repeats 209 more times (210 occurrences in total) between 11:02:23.778115 and 11:02:23.831851, always with errno = 111, tqpair=0x1e0d420, addr=10.0.0.2, port=4420; repeats condensed here ...]
00:22:07.784 [2024-05-15 11:02:23.832083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.784 [2024-05-15 11:02:23.832110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.784 qpair failed and we were unable to recover it. 00:22:07.784 [2024-05-15 11:02:23.832326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.784 [2024-05-15 11:02:23.832352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.784 qpair failed and we were unable to recover it. 00:22:07.784 [2024-05-15 11:02:23.832567] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.784 [2024-05-15 11:02:23.832593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.784 qpair failed and we were unable to recover it. 00:22:07.784 [2024-05-15 11:02:23.832837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.784 [2024-05-15 11:02:23.832864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.784 qpair failed and we were unable to recover it. 00:22:07.784 [2024-05-15 11:02:23.833096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.784 [2024-05-15 11:02:23.833132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.784 qpair failed and we were unable to recover it. 00:22:07.784 [2024-05-15 11:02:23.833335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.784 [2024-05-15 11:02:23.833362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.784 qpair failed and we were unable to recover it. 00:22:07.784 [2024-05-15 11:02:23.833549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.784 [2024-05-15 11:02:23.833575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.784 qpair failed and we were unable to recover it. 00:22:07.784 [2024-05-15 11:02:23.833780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.784 [2024-05-15 11:02:23.833805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.784 qpair failed and we were unable to recover it. 00:22:07.784 [2024-05-15 11:02:23.834022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.784 [2024-05-15 11:02:23.834047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.784 qpair failed and we were unable to recover it. 00:22:07.784 [2024-05-15 11:02:23.834264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.834293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 
00:22:07.785 [2024-05-15 11:02:23.834509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.834538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.834733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.834757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.834941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.834970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.835159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.835185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.835405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.835431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.835691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.835720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.835981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.836007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.836218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.836244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.836456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.836486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.836712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.836740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 
00:22:07.785 [2024-05-15 11:02:23.836991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.837017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.837251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.837280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.837505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.837533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.837743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.837769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.838026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.838057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.838291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.838320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.838583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.838608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.838863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.838891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.839109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.839139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.839409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.839435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 
00:22:07.785 [2024-05-15 11:02:23.839670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.839698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.839939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.839976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.840191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.840217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.840425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.840467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.840699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.840727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.840960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.840987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.841195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.841221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.841461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.841489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.841750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.841776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.842042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.842072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 
00:22:07.785 [2024-05-15 11:02:23.842348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.842374] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.842571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.842596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.842816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.842845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.843073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.843103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.843374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.843400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.843645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.843674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.843873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.843903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.844149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.785 [2024-05-15 11:02:23.844176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.785 qpair failed and we were unable to recover it. 00:22:07.785 [2024-05-15 11:02:23.844414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.844446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.844670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.844700] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 
00:22:07.786 [2024-05-15 11:02:23.844943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.844969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.845247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.845277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.845513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.845542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.845745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.845771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.846009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.846036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.846248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.846273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.846517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.846542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.846778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.846806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.847041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.847075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.847301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.847327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 
00:22:07.786 [2024-05-15 11:02:23.847594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.847623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.847873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.847899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.848094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.848120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.848362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.848391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.848613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.848642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.848857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.848883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.849116] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.849142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.849356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.849384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.849646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.849671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.849938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.849965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 
00:22:07.786 [2024-05-15 11:02:23.850233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.850262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.850549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.850574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.850840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.850866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.851141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.851172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.851432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.851458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.851709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.851738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.851975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.852004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.852252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.852277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.852561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.852590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.852802] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.852830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 
00:22:07.786 [2024-05-15 11:02:23.853020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.853047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.853232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.853258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.853500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.853528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.853741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.853766] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.854019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.854058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.854270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.854300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.854536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.854562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.786 qpair failed and we were unable to recover it. 00:22:07.786 [2024-05-15 11:02:23.854778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.786 [2024-05-15 11:02:23.854809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.855021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.855050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.855285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.855313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 
00:22:07.787 [2024-05-15 11:02:23.855602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.855628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.855838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.855867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.856099] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.856128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.856372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.856400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.856648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.856676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.856877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.856902] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.857134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.857160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.857393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.857421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.857642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.857668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.857918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.857953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 
00:22:07.787 [2024-05-15 11:02:23.858181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.858210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.858430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.858455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.858695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.858723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.858956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.858989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.859198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.859223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.859404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.859439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.859645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.859671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.859881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.859907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.860136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.860164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.860429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.860455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 
00:22:07.787 [2024-05-15 11:02:23.860670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.860696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.860909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.860956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.861217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.861246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.861484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.861510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.861748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.861777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.862036] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.862066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.862294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.862322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.862555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.862583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.862807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.862836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.787 qpair failed and we were unable to recover it. 00:22:07.787 [2024-05-15 11:02:23.863053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.787 [2024-05-15 11:02:23.863080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.788 qpair failed and we were unable to recover it. 
00:22:07.788 [2024-05-15 11:02:23.863325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.788 [2024-05-15 11:02:23.863350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.788 qpair failed and we were unable to recover it. 00:22:07.788 [2024-05-15 11:02:23.863550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.788 [2024-05-15 11:02:23.863576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.788 qpair failed and we were unable to recover it. 00:22:07.788 [2024-05-15 11:02:23.863770] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.788 [2024-05-15 11:02:23.863797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.788 qpair failed and we were unable to recover it. 00:22:07.788 [2024-05-15 11:02:23.864016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.788 [2024-05-15 11:02:23.864043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.788 qpair failed and we were unable to recover it. 00:22:07.788 [2024-05-15 11:02:23.864280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.788 [2024-05-15 11:02:23.864306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.788 qpair failed and we were unable to recover it. 00:22:07.788 [2024-05-15 11:02:23.864511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.788 [2024-05-15 11:02:23.864537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.788 qpair failed and we were unable to recover it. 00:22:07.788 [2024-05-15 11:02:23.864773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.788 [2024-05-15 11:02:23.864806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.788 qpair failed and we were unable to recover it. 00:22:07.788 [2024-05-15 11:02:23.865040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.788 [2024-05-15 11:02:23.865072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.788 qpair failed and we were unable to recover it. 00:22:07.788 [2024-05-15 11:02:23.865307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.788 [2024-05-15 11:02:23.865332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.788 qpair failed and we were unable to recover it. 00:22:07.788 [2024-05-15 11:02:23.865566] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.788 [2024-05-15 11:02:23.865594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.788 qpair failed and we were unable to recover it. 
00:22:07.788 [2024-05-15 11:02:23.865793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.788 [2024-05-15 11:02:23.865822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:07.788 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every reconnect attempt from 11:02:23.866 through 11:02:23.919, always errno = 111 against tqpair=0x1e0d420, addr=10.0.0.2, port=4420 ...]
00:22:07.793 [2024-05-15 11:02:23.919824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.793 [2024-05-15 11:02:23.919852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:07.793 qpair failed and we were unable to recover it.
00:22:07.793 [2024-05-15 11:02:23.920114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.793 [2024-05-15 11:02:23.920143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.793 qpair failed and we were unable to recover it. 00:22:07.793 [2024-05-15 11:02:23.920398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.793 [2024-05-15 11:02:23.920424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.793 qpair failed and we were unable to recover it. 00:22:07.793 [2024-05-15 11:02:23.920677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.793 [2024-05-15 11:02:23.920705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.793 qpair failed and we were unable to recover it. 00:22:07.793 [2024-05-15 11:02:23.920940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.793 [2024-05-15 11:02:23.920970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.793 qpair failed and we were unable to recover it. 00:22:07.793 [2024-05-15 11:02:23.921148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.793 [2024-05-15 11:02:23.921173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.793 qpair failed and we were unable to recover it. 00:22:07.793 [2024-05-15 11:02:23.921370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.793 [2024-05-15 11:02:23.921398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.793 qpair failed and we were unable to recover it. 00:22:07.793 [2024-05-15 11:02:23.921633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.793 [2024-05-15 11:02:23.921661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.793 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.921925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.921955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.922195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.922224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.922460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.922488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 
00:22:07.794 [2024-05-15 11:02:23.922717] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.922743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.922943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.922973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.923193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.923222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.923443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.923468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.923729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.923758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.923961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.923990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.924222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.924248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.924512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.924539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.924774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.924803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.925078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.925104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 
00:22:07.794 [2024-05-15 11:02:23.925352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.925381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.925588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.925616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.925846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.925871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.926104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.926133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.926343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.926373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.926607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.926634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.926889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.926917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.927161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.927188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.927401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.927427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.927637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.927662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 
00:22:07.794 [2024-05-15 11:02:23.927874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.927908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.928129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.928154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.928366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.928391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.928572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.928600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.928837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.928863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.929066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.929095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.929352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.929380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.929610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.929635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.929890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.929919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.930156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.930184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 
00:22:07.794 [2024-05-15 11:02:23.930397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.930424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.930636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.930664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.930872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.930897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.931090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.931117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.931363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.931392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.931647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.794 [2024-05-15 11:02:23.931673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.794 qpair failed and we were unable to recover it. 00:22:07.794 [2024-05-15 11:02:23.931883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.931909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.932100] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.932128] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.932343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.932372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.932577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.932603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 
00:22:07.795 [2024-05-15 11:02:23.932864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.932892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.933163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.933193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.933453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.933479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.933722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.933750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.933985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.934014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.934227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.934254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.934479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.934507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.934736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.934764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.934997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.935024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.935263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.935292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 
00:22:07.795 [2024-05-15 11:02:23.935514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.935542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.935776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.935801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.936041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.936072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.936332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.936358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.936561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.936586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.936820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.936849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.937051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.937079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.937313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.937338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.937570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.937598] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.937797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.937825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 
00:22:07.795 [2024-05-15 11:02:23.938069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.938096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.938357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.938385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.938620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.938648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.938857] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.938882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.939091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.939118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.939329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.939358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.939589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.939614] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.939847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.939875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.940111] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.940140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.940347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.940372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 
00:22:07.795 [2024-05-15 11:02:23.940609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.940637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.940861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.940889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.941134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.941160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.941435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.941461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.941686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.795 [2024-05-15 11:02:23.941714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.795 qpair failed and we were unable to recover it. 00:22:07.795 [2024-05-15 11:02:23.941922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.941954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.942219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.942245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.942478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.942506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.942735] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.942760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.942977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.943007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 
00:22:07.796 [2024-05-15 11:02:23.943247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.943275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.943534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.943559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.943974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.944004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.944280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.944309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.944550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.944575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.944817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.944845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.945106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.945132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.945344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.945369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.945636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.945671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.945899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.945925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 
00:22:07.796 [2024-05-15 11:02:23.946138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.946163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.946401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.946431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.946688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.946716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.946908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.946942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.947190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.947219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.947429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.947457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.947711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.947736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.947976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.948005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.948238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.948266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.948496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.948524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 
00:22:07.796 [2024-05-15 11:02:23.948759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.948787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.948987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.949016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.949234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.949260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.949463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.949492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.949714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.949742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.949955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.949982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.950221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.950249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.950480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.796 [2024-05-15 11:02:23.950505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.796 qpair failed and we were unable to recover it. 00:22:07.796 [2024-05-15 11:02:23.950712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.797 [2024-05-15 11:02:23.950738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.797 qpair failed and we were unable to recover it. 00:22:07.797 [2024-05-15 11:02:23.951003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.797 [2024-05-15 11:02:23.951033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.797 qpair failed and we were unable to recover it. 
00:22:07.797 [2024-05-15 11:02:23.951238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.797 [2024-05-15 11:02:23.951266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.797 qpair failed and we were unable to recover it. 00:22:07.797 [2024-05-15 11:02:23.951525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.797 [2024-05-15 11:02:23.951550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.797 qpair failed and we were unable to recover it. 00:22:07.797 [2024-05-15 11:02:23.951787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.797 [2024-05-15 11:02:23.951815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.797 qpair failed and we were unable to recover it. 00:22:07.797 [2024-05-15 11:02:23.952040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.797 [2024-05-15 11:02:23.952069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.797 qpair failed and we were unable to recover it. 00:22:07.797 [2024-05-15 11:02:23.952307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.797 [2024-05-15 11:02:23.952332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.797 qpair failed and we were unable to recover it. 00:22:07.797 [2024-05-15 11:02:23.952549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.797 [2024-05-15 11:02:23.952579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.797 qpair failed and we were unable to recover it. 00:22:07.797 [2024-05-15 11:02:23.952758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.797 [2024-05-15 11:02:23.952784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.797 qpair failed and we were unable to recover it. 00:22:07.797 [2024-05-15 11:02:23.952972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.797 [2024-05-15 11:02:23.952998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.797 qpair failed and we were unable to recover it. 00:22:07.797 [2024-05-15 11:02:23.953211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.797 [2024-05-15 11:02:23.953240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.797 qpair failed and we were unable to recover it. 00:22:07.797 [2024-05-15 11:02:23.953471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.797 [2024-05-15 11:02:23.953499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:07.797 qpair failed and we were unable to recover it. 
00:22:07.797 [2024-05-15 11:02:23.953729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.797 [2024-05-15 11:02:23.953755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:07.797 qpair failed and we were unable to recover it.
[... roughly 200 further occurrences of the same three-line failure elided: the identical connect() errno = 111 / sock connection error / "qpair failed and we were unable to recover it." pattern repeats back-to-back from 11:02:23.954 through 11:02:24.007, always against tqpair=0x1e0d420, addr=10.0.0.2, port=4420 ...]
00:22:08.078 [2024-05-15 11:02:24.007700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:08.078 [2024-05-15 11:02:24.007730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:08.078 qpair failed and we were unable to recover it.
00:22:08.078 [2024-05-15 11:02:24.008003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.078 [2024-05-15 11:02:24.008030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.078 qpair failed and we were unable to recover it. 00:22:08.078 [2024-05-15 11:02:24.008301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.078 [2024-05-15 11:02:24.008326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.078 qpair failed and we were unable to recover it. 00:22:08.078 [2024-05-15 11:02:24.008560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.078 [2024-05-15 11:02:24.008607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.078 qpair failed and we were unable to recover it. 00:22:08.078 [2024-05-15 11:02:24.008864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.078 [2024-05-15 11:02:24.008890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.078 qpair failed and we were unable to recover it. 00:22:08.078 [2024-05-15 11:02:24.009132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.078 [2024-05-15 11:02:24.009161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.078 qpair failed and we were unable to recover it. 00:22:08.078 [2024-05-15 11:02:24.009415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.078 [2024-05-15 11:02:24.009444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.078 qpair failed and we were unable to recover it. 00:22:08.078 [2024-05-15 11:02:24.009655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.078 [2024-05-15 11:02:24.009680] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.078 qpair failed and we were unable to recover it. 00:22:08.078 [2024-05-15 11:02:24.009881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.078 [2024-05-15 11:02:24.009907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.078 qpair failed and we were unable to recover it. 00:22:08.078 [2024-05-15 11:02:24.010142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.078 [2024-05-15 11:02:24.010171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.078 qpair failed and we were unable to recover it. 00:22:08.078 [2024-05-15 11:02:24.010403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.078 [2024-05-15 11:02:24.010428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.078 qpair failed and we were unable to recover it. 
00:22:08.078 [2024-05-15 11:02:24.010667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.078 [2024-05-15 11:02:24.010695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.078 qpair failed and we were unable to recover it. 00:22:08.078 [2024-05-15 11:02:24.010886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.078 [2024-05-15 11:02:24.010916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.078 qpair failed and we were unable to recover it. 00:22:08.078 [2024-05-15 11:02:24.011168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.078 [2024-05-15 11:02:24.011195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.078 qpair failed and we were unable to recover it. 00:22:08.078 [2024-05-15 11:02:24.011465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.078 [2024-05-15 11:02:24.011494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.078 qpair failed and we were unable to recover it. 00:22:08.078 [2024-05-15 11:02:24.011731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.078 [2024-05-15 11:02:24.011758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.078 qpair failed and we were unable to recover it. 00:22:08.078 [2024-05-15 11:02:24.011990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.078 [2024-05-15 11:02:24.012016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.078 qpair failed and we were unable to recover it. 00:22:08.078 [2024-05-15 11:02:24.012267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.078 [2024-05-15 11:02:24.012296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.078 qpair failed and we were unable to recover it. 00:22:08.078 [2024-05-15 11:02:24.012522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.078 [2024-05-15 11:02:24.012550] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.078 qpair failed and we were unable to recover it. 00:22:08.078 [2024-05-15 11:02:24.012756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.078 [2024-05-15 11:02:24.012781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.078 qpair failed and we were unable to recover it. 00:22:08.078 [2024-05-15 11:02:24.012990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.078 [2024-05-15 11:02:24.013031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.078 qpair failed and we were unable to recover it. 
00:22:08.078 [2024-05-15 11:02:24.013268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.078 [2024-05-15 11:02:24.013296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.078 qpair failed and we were unable to recover it. 00:22:08.078 [2024-05-15 11:02:24.013528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.078 [2024-05-15 11:02:24.013555] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.078 qpair failed and we were unable to recover it. 00:22:08.078 [2024-05-15 11:02:24.013739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.078 [2024-05-15 11:02:24.013765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.078 qpair failed and we were unable to recover it. 00:22:08.078 [2024-05-15 11:02:24.013967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.013996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.014228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.014253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.014517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.014546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.014778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.014803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.015010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.015037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.015266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.015295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.015524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.015557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 
00:22:08.079 [2024-05-15 11:02:24.015818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.015844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.016108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.016137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.016359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.016387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.016608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.016634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.016868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.016897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.017147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.017173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.017373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.017399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.017608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.017636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.017891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.017919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.018142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.018168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 
00:22:08.079 [2024-05-15 11:02:24.018395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.018423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.018655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.018685] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.018924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.018962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.019182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.019208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.019443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.019486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.019723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.019748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.019962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.019991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.020191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.020219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.020438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.020463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.020726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.020754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 
00:22:08.079 [2024-05-15 11:02:24.021019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.021048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.021312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.021337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.021575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.021603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.021811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.021839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.022058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.022084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.022320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.022348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.022589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.022616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.022852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.022878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.023148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.023175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.023422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.023450] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 
00:22:08.079 [2024-05-15 11:02:24.023682] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.023707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.079 [2024-05-15 11:02:24.023964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.079 [2024-05-15 11:02:24.023993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.079 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.024216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.024244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.024489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.024514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.024771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.024799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.025064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.025093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.025315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.025340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.025610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.025635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.025867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.025896] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.026093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.026119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 
00:22:08.080 [2024-05-15 11:02:24.026355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.026388] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.026619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.026647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.026869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.026895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.027113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.027140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.027354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.027382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.027608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.027634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.027860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.027888] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.028166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.028192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.028437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.028462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.028670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.028699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 
00:22:08.080 [2024-05-15 11:02:24.028928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.028963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.029164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.029189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.029385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.029413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.029614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.029642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.029860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.029885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.030124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.030150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.030385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.030413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.030622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.030647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.030887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.030915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.031165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.031191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 
00:22:08.080 [2024-05-15 11:02:24.031369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.031394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.031571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.031596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.080 [2024-05-15 11:02:24.031809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.080 [2024-05-15 11:02:24.031848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.080 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.032120] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.032146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.032391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.032419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.032625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.032653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.032904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.032935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.033129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.033158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.033335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.033360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.033568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.033593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 
00:22:08.081 [2024-05-15 11:02:24.033854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.033884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.034102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.034131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.034351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.034376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.034612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.034643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.034893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.034921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.035150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.035176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.035362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.035391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.035601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.035629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.035855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.035880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.036070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.036097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 
00:22:08.081 [2024-05-15 11:02:24.036303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.036331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.036532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.036557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.036773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.036798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.036972] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.036999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.037207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.037233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.037432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.037457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.037713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.037742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.037979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.038005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.038217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.038245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.038483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.038511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 
00:22:08.081 [2024-05-15 11:02:24.038716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.038741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.038954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.038997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.039257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.039283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.039467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.039492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.039724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.039757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.040022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.040051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.040269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.040294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.040540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.040565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.040778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.040806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 00:22:08.081 [2024-05-15 11:02:24.041039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.081 [2024-05-15 11:02:24.041065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.081 qpair failed and we were unable to recover it. 
00:22:08.081 [2024-05-15 11:02:24.041332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:08.081 [2024-05-15 11:02:24.041360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:08.081 qpair failed and we were unable to recover it.
[... the same three-line error repeats roughly 200 more times between 11:02:24.041 and 11:02:24.095, every attempt against tqpair=0x1e0d420, addr=10.0.0.2, port=4420 failing with connect() errno = 111 ...]
00:22:08.087 [2024-05-15 11:02:24.095377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:08.087 [2024-05-15 11:02:24.095403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:08.087 qpair failed and we were unable to recover it.
00:22:08.087 [2024-05-15 11:02:24.095633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.087 [2024-05-15 11:02:24.095661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.087 qpair failed and we were unable to recover it. 00:22:08.087 [2024-05-15 11:02:24.095890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.087 [2024-05-15 11:02:24.095922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.087 qpair failed and we were unable to recover it. 00:22:08.087 [2024-05-15 11:02:24.096157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.087 [2024-05-15 11:02:24.096186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.087 qpair failed and we were unable to recover it. 00:22:08.087 [2024-05-15 11:02:24.096398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.087 [2024-05-15 11:02:24.096424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.087 qpair failed and we were unable to recover it. 00:22:08.087 [2024-05-15 11:02:24.096657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.087 [2024-05-15 11:02:24.096684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.087 qpair failed and we were unable to recover it. 00:22:08.087 [2024-05-15 11:02:24.096887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.087 [2024-05-15 11:02:24.096916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.087 qpair failed and we were unable to recover it. 00:22:08.087 [2024-05-15 11:02:24.097176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.087 [2024-05-15 11:02:24.097205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.087 qpair failed and we were unable to recover it. 00:22:08.087 [2024-05-15 11:02:24.097438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.087 [2024-05-15 11:02:24.097464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.087 qpair failed and we were unable to recover it. 00:22:08.087 [2024-05-15 11:02:24.097703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.087 [2024-05-15 11:02:24.097732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.087 qpair failed and we were unable to recover it. 00:22:08.087 [2024-05-15 11:02:24.097970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.087 [2024-05-15 11:02:24.097999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.087 qpair failed and we were unable to recover it. 
00:22:08.087 [2024-05-15 11:02:24.098232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.087 [2024-05-15 11:02:24.098261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.087 qpair failed and we were unable to recover it. 00:22:08.087 [2024-05-15 11:02:24.098498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.087 [2024-05-15 11:02:24.098524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.087 qpair failed and we were unable to recover it. 00:22:08.087 [2024-05-15 11:02:24.098708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.087 [2024-05-15 11:02:24.098733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.087 qpair failed and we were unable to recover it. 00:22:08.087 [2024-05-15 11:02:24.098940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.087 [2024-05-15 11:02:24.098970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.087 qpair failed and we were unable to recover it. 00:22:08.087 [2024-05-15 11:02:24.099211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.087 [2024-05-15 11:02:24.099236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.087 qpair failed and we were unable to recover it. 00:22:08.087 [2024-05-15 11:02:24.099469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.087 [2024-05-15 11:02:24.099495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.087 qpair failed and we were unable to recover it. 00:22:08.087 [2024-05-15 11:02:24.099755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.087 [2024-05-15 11:02:24.099781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.087 qpair failed and we were unable to recover it. 00:22:08.087 [2024-05-15 11:02:24.100021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.087 [2024-05-15 11:02:24.100050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.087 qpair failed and we were unable to recover it. 00:22:08.087 [2024-05-15 11:02:24.100363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.087 [2024-05-15 11:02:24.100425] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.100653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.100678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 
00:22:08.088 [2024-05-15 11:02:24.100916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.100956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.101234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.101260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.101518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.101546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.101805] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.101831] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.102049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.102079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.102346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.102371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.102584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.102610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.102884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.102909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.103123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.103148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.103400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.103429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 
00:22:08.088 [2024-05-15 11:02:24.103732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.103757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.103998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.104024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.104267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.104295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.104518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.104547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.104825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.104850] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.105066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.105093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.105344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.105373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.105607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.105635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.105865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.105893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.106163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.106190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 
00:22:08.088 [2024-05-15 11:02:24.106441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.106469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.106676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.106704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.106946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.106977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.107207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.107233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.107467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.107495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.107722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.107750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.107977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.108005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.108241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.108267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.108484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.108512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.108748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.108773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 
00:22:08.088 [2024-05-15 11:02:24.108967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.108994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.109233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.109258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.109523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.109548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.109769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.109798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.110064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.110090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.110282] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.110307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.110547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.110573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.088 [2024-05-15 11:02:24.110812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.088 [2024-05-15 11:02:24.110840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.088 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.111079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.111105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.111354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.111379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 
00:22:08.089 [2024-05-15 11:02:24.111619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.111647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.111877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.111905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.112175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.112204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.112464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.112489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.112727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.112757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.112964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.112995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.113221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.113250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.113511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.113536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.113774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.113803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.114070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.114100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 
00:22:08.089 [2024-05-15 11:02:24.114307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.114333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.114507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.114532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.114706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.114733] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.114936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.114967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.115203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.115229] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.115403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.115428] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.115631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.115659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.115858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.115887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.116153] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.116182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.116395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.116421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 
00:22:08.089 [2024-05-15 11:02:24.116652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.116694] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.116928] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.116972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.117182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.117210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.117414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.117440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.117708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.117736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.117953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.117979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.118169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.118195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.118418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.118443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.118642] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.118670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 00:22:08.089 [2024-05-15 11:02:24.118882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.118910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.089 qpair failed and we were unable to recover it. 
00:22:08.089 [2024-05-15 11:02:24.119137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.089 [2024-05-15 11:02:24.119165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.119370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.119396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.119600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.119629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.119870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.119895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.120141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.120176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.120381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.120406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.120643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.120675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.120878] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.120906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.121149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.121179] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.121386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.121411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 
00:22:08.090 [2024-05-15 11:02:24.121680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.121709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.121949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.121975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.122189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.122218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.122444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.122469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.122706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.122734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.122937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.122966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.123170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.123196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.123435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.123460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.123707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.123736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.123970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.123999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 
00:22:08.090 [2024-05-15 11:02:24.124243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.124273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.124494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.124519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.124751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.124779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.125018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.125044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.125327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.125379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.125637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.125663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.125939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.125969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.126201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.126227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.126437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.126464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.126673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.126701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 
00:22:08.090 [2024-05-15 11:02:24.126916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.126952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.127214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.127243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.127553] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.127581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.127815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.127841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.128118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.128147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.128382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.128407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.128608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.128633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.128840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.128866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.129108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.129138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.090 qpair failed and we were unable to recover it. 00:22:08.090 [2024-05-15 11:02:24.129375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.090 [2024-05-15 11:02:24.129403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.091 qpair failed and we were unable to recover it. 
00:22:08.091 [2024-05-15 11:02:24.129827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:08.091 [2024-05-15 11:02:24.129879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:08.091 qpair failed and we were unable to recover it.
00:22:08.096 [... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats unchanged, apart from timestamps, through 2024-05-15 11:02:24.184147 ...]
00:22:08.096 [2024-05-15 11:02:24.184355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.096 [2024-05-15 11:02:24.184380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.096 qpair failed and we were unable to recover it. 00:22:08.096 [2024-05-15 11:02:24.184602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.096 [2024-05-15 11:02:24.184646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.096 qpair failed and we were unable to recover it. 00:22:08.096 [2024-05-15 11:02:24.184916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.096 [2024-05-15 11:02:24.184948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.096 qpair failed and we were unable to recover it. 00:22:08.096 [2024-05-15 11:02:24.185126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.096 [2024-05-15 11:02:24.185151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.096 qpair failed and we were unable to recover it. 00:22:08.096 [2024-05-15 11:02:24.185355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.096 [2024-05-15 11:02:24.185383] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.096 qpair failed and we were unable to recover it. 00:22:08.096 [2024-05-15 11:02:24.185626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.096 [2024-05-15 11:02:24.185651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.096 qpair failed and we were unable to recover it. 00:22:08.096 [2024-05-15 11:02:24.185915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.096 [2024-05-15 11:02:24.185950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.096 qpair failed and we were unable to recover it. 00:22:08.096 [2024-05-15 11:02:24.186188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.096 [2024-05-15 11:02:24.186214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.096 qpair failed and we were unable to recover it. 00:22:08.096 [2024-05-15 11:02:24.186454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.096 [2024-05-15 11:02:24.186482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.096 qpair failed and we were unable to recover it. 00:22:08.096 [2024-05-15 11:02:24.186736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.096 [2024-05-15 11:02:24.186765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.096 qpair failed and we were unable to recover it. 
00:22:08.096 [2024-05-15 11:02:24.186993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.096 [2024-05-15 11:02:24.187023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.096 qpair failed and we were unable to recover it. 00:22:08.096 [2024-05-15 11:02:24.187259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.096 [2024-05-15 11:02:24.187284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.096 qpair failed and we were unable to recover it. 00:22:08.096 [2024-05-15 11:02:24.187559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.096 [2024-05-15 11:02:24.187584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.096 qpair failed and we were unable to recover it. 00:22:08.096 [2024-05-15 11:02:24.187803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.096 [2024-05-15 11:02:24.187845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.096 qpair failed and we were unable to recover it. 00:22:08.096 [2024-05-15 11:02:24.188085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.096 [2024-05-15 11:02:24.188114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.096 qpair failed and we were unable to recover it. 00:22:08.096 [2024-05-15 11:02:24.188352] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.096 [2024-05-15 11:02:24.188377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.096 qpair failed and we were unable to recover it. 00:22:08.096 [2024-05-15 11:02:24.188635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.096 [2024-05-15 11:02:24.188663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.096 qpair failed and we were unable to recover it. 00:22:08.096 [2024-05-15 11:02:24.188925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.096 [2024-05-15 11:02:24.188958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.096 qpair failed and we were unable to recover it. 00:22:08.096 [2024-05-15 11:02:24.189191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.189219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.189431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.189457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 
00:22:08.097 [2024-05-15 11:02:24.189711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.189739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.190002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.190030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.190243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.190271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.190519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.190544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.190780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.190808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.191037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.191067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.191299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.191324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.191534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.191559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.191791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.191820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.192031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.192060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 
00:22:08.097 [2024-05-15 11:02:24.192255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.192283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.192489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.192516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.192753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.192782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.193045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.193071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.193253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.193279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.193461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.193486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.193672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.193697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.193936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.193965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.194164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.194193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.194453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.194478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 
00:22:08.097 [2024-05-15 11:02:24.194741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.194769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.195026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.195061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.195572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.195600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.195813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.195838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.196075] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.196104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.196338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.196368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.196745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.196804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.197011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.197037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.197280] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.197308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.197516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.197543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 
00:22:08.097 [2024-05-15 11:02:24.197780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.197809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.198066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.198092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.198330] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.198356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.198568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.198594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.198827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.198855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.199077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.199105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.199348] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.097 [2024-05-15 11:02:24.199377] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.097 qpair failed and we were unable to recover it. 00:22:08.097 [2024-05-15 11:02:24.199608] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.199636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.199871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.199899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.200154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.200181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 
00:22:08.098 [2024-05-15 11:02:24.200416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.200444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.200683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.200712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.200913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.200948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.201159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.201184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.201454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.201482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.201706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.201734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.201946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.201976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.202201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.202227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.202464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.202496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.202734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.202763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 
00:22:08.098 [2024-05-15 11:02:24.202999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.203030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.203242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.203269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.203505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.203530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.203769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.203794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.204055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.204084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.204314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.204340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.204577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.204604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.204824] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.204852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.205077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.205106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.205313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.205339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 
00:22:08.098 [2024-05-15 11:02:24.205582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.205607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.205801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.205829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.206089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.206119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.206314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.206339] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.206545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.206575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.206808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.206834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.207276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.207326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.207539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.098 [2024-05-15 11:02:24.207564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.098 qpair failed and we were unable to recover it. 00:22:08.098 [2024-05-15 11:02:24.207756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.207781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.208021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.208049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 
00:22:08.099 [2024-05-15 11:02:24.208255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.208281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.208517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.208542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.208889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.208913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.209129] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.209170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.209520] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.209571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.209791] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.209822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.210033] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.210062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.210269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.210294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.210606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.210663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.210927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.210959] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 
00:22:08.099 [2024-05-15 11:02:24.211208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.211236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.211494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.211522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.211783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.211811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.212028] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.212054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.212289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.212317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.212543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.212571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.212920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.212986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.213213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.213239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.213431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.213457] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.213697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.213725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 
00:22:08.099 [2024-05-15 11:02:24.214000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.214026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.214208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.214233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.214415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.214441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.214677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.214705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.214907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.214939] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.215173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.215200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.215449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.215477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.215683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.215711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.215944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.215971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.216176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.216201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 
00:22:08.099 [2024-05-15 11:02:24.216435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.216463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.216695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.216723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.216951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.216981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.217221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.217249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.217432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.217459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.217725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.217753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.218019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.218048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.218285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.218311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.218557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.218585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 00:22:08.099 [2024-05-15 11:02:24.218820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.099 [2024-05-15 11:02:24.218848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.099 qpair failed and we were unable to recover it. 
00:22:08.100 [2024-05-15 11:02:24.219048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:08.100 [2024-05-15 11:02:24.219077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:08.100 qpair failed and we were unable to recover it.
00:22:08.105 [... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 11:02:24.219048 through 11:02:24.274127, differing only in timestamps ...]
00:22:08.105 [2024-05-15 11:02:24.274350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.274376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.274611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.274639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.274875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.274903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.275128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.275154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.275404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.275432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.275661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.275686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.275919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.275955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.276192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.276221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.276440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.276469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.276668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.276697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 
00:22:08.105 [2024-05-15 11:02:24.276941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.276970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.277201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.277227] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.277466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.277495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.277732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.277761] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.277996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.278026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.278230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.278256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.278514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.278543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.278759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.278785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.279019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.279048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.279309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.279334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 
00:22:08.105 [2024-05-15 11:02:24.279597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.279626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.279891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.279919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.280190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.280219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.280455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.280480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.280721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.280749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.280986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.281015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.281239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.281267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.281527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.281553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.281781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.281809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.282039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.282068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 
00:22:08.105 [2024-05-15 11:02:24.282483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.282535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.282788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.282814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.283050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.283079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.283313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.283341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.283559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.283587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.283842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.283867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.284047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.105 [2024-05-15 11:02:24.284073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.105 qpair failed and we were unable to recover it. 00:22:08.105 [2024-05-15 11:02:24.284323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.106 [2024-05-15 11:02:24.284351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.106 qpair failed and we were unable to recover it. 00:22:08.106 [2024-05-15 11:02:24.284653] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.106 [2024-05-15 11:02:24.284681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.106 qpair failed and we were unable to recover it. 00:22:08.106 [2024-05-15 11:02:24.284910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.106 [2024-05-15 11:02:24.284943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.106 qpair failed and we were unable to recover it. 
00:22:08.106 [2024-05-15 11:02:24.285162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.106 [2024-05-15 11:02:24.285196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.106 qpair failed and we were unable to recover it. 00:22:08.106 [2024-05-15 11:02:24.285454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.106 [2024-05-15 11:02:24.285483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.106 qpair failed and we were unable to recover it. 00:22:08.106 [2024-05-15 11:02:24.285711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.106 [2024-05-15 11:02:24.285741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.106 qpair failed and we were unable to recover it. 00:22:08.106 [2024-05-15 11:02:24.286008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.106 [2024-05-15 11:02:24.286034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.106 qpair failed and we were unable to recover it. 00:22:08.106 [2024-05-15 11:02:24.286294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.106 [2024-05-15 11:02:24.286322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.106 qpair failed and we were unable to recover it. 00:22:08.106 [2024-05-15 11:02:24.286587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.106 [2024-05-15 11:02:24.286615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.106 qpair failed and we were unable to recover it. 00:22:08.106 [2024-05-15 11:02:24.286920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.106 [2024-05-15 11:02:24.286955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.106 qpair failed and we were unable to recover it. 00:22:08.106 [2024-05-15 11:02:24.287204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.106 [2024-05-15 11:02:24.287230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.106 qpair failed and we were unable to recover it. 00:22:08.106 [2024-05-15 11:02:24.287480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.106 [2024-05-15 11:02:24.287508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.106 qpair failed and we were unable to recover it. 00:22:08.106 [2024-05-15 11:02:24.287722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.106 [2024-05-15 11:02:24.287750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.106 qpair failed and we were unable to recover it. 
00:22:08.106 [2024-05-15 11:02:24.287985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.106 [2024-05-15 11:02:24.288014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.106 qpair failed and we were unable to recover it. 00:22:08.106 [2024-05-15 11:02:24.288247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.106 [2024-05-15 11:02:24.288272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.106 qpair failed and we were unable to recover it. 00:22:08.106 [2024-05-15 11:02:24.288509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.106 [2024-05-15 11:02:24.288538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.106 qpair failed and we were unable to recover it. 00:22:08.106 [2024-05-15 11:02:24.288764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.106 [2024-05-15 11:02:24.288792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.106 qpair failed and we were unable to recover it. 00:22:08.106 [2024-05-15 11:02:24.289030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.106 [2024-05-15 11:02:24.289059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.106 qpair failed and we were unable to recover it. 00:22:08.106 [2024-05-15 11:02:24.289266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.106 [2024-05-15 11:02:24.289293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.106 qpair failed and we were unable to recover it. 00:22:08.106 [2024-05-15 11:02:24.289536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.106 [2024-05-15 11:02:24.289565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.106 qpair failed and we were unable to recover it. 00:22:08.106 [2024-05-15 11:02:24.289797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.106 [2024-05-15 11:02:24.289825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.106 qpair failed and we were unable to recover it. 00:22:08.106 [2024-05-15 11:02:24.290054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.106 [2024-05-15 11:02:24.290084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.106 qpair failed and we were unable to recover it. 00:22:08.106 [2024-05-15 11:02:24.290316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.106 [2024-05-15 11:02:24.290342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.106 qpair failed and we were unable to recover it. 
00:22:08.106 [2024-05-15 11:02:24.290610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.106 [2024-05-15 11:02:24.290642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.106 qpair failed and we were unable to recover it. 00:22:08.106 [2024-05-15 11:02:24.290879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.106 [2024-05-15 11:02:24.290907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.106 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.291160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.291185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.291398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.291424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.291634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.291662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.291865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.291893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.292159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.292188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.292394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.292424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.292612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.292638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.292881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.292910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 
00:22:08.383 [2024-05-15 11:02:24.293171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.293200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.293452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.293478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.293718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.293747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.293976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.294006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.294270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.294295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.294532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.294557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.294831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.294856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.295044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.295070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.295371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.295433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.295671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.295696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 
00:22:08.383 [2024-05-15 11:02:24.295918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.295966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.296205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.296234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.296449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.296477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.296678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.296704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.296968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.296997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.297263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.297288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.297657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.297723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.297957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.297984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.298221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.298249] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.298501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.298526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 
00:22:08.383 [2024-05-15 11:02:24.298756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.298784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.299015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.299041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.299285] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.299315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.299545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.299574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.299800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.299828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.300054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.300081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.300313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.300341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.383 [2024-05-15 11:02:24.300572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.383 [2024-05-15 11:02:24.300600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.383 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.300979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.301007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.301270] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.301296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 
00:22:08.384 [2024-05-15 11:02:24.301547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.301575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.301818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.301846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.302061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.302087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.302293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.302319] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.302559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.302588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.302796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.302824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.303046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.303075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.303318] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.303344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.303590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.303618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.303885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.303913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 
00:22:08.384 [2024-05-15 11:02:24.304161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.304190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.304444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.304470] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.304698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.304726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.304922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.304966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.305211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.305237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.305436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.305463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.305645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.305672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.305913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.305951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.306169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.306194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.306428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.306454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 
00:22:08.384 [2024-05-15 11:02:24.306671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.306699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.306954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.306986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.307228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.307253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.307439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.307465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.307712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.307740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.307941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.307970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.308203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.308231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.308466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.308491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.308730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.308758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 00:22:08.384 [2024-05-15 11:02:24.309014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.384 [2024-05-15 11:02:24.309040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.384 qpair failed and we were unable to recover it. 
00:22:08.384 [2024-05-15 11:02:24.309279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:08.384 [2024-05-15 11:02:24.309307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:08.384 qpair failed and we were unable to recover it.
[... the same three-line error sequence repeats continuously from 11:02:24.309 through 11:02:24.364 (log timestamps 00:22:08.384-00:22:08.390); every attempt fails identically for tqpair=0x1e0d420 connecting to 10.0.0.2, port=4420 with errno = 111; duplicate entries elided ...]
00:22:08.390 [2024-05-15 11:02:24.364328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:08.390 [2024-05-15 11:02:24.364354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:08.390 qpair failed and we were unable to recover it.
00:22:08.390 [2024-05-15 11:02:24.364580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.390 [2024-05-15 11:02:24.364609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.390 qpair failed and we were unable to recover it. 00:22:08.390 [2024-05-15 11:02:24.364864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.390 [2024-05-15 11:02:24.364893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.390 qpair failed and we were unable to recover it. 00:22:08.390 [2024-05-15 11:02:24.365139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.390 [2024-05-15 11:02:24.365168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.390 qpair failed and we were unable to recover it. 00:22:08.390 [2024-05-15 11:02:24.365426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.390 [2024-05-15 11:02:24.365452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.390 qpair failed and we were unable to recover it. 00:22:08.390 [2024-05-15 11:02:24.365683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.390 [2024-05-15 11:02:24.365708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.390 qpair failed and we were unable to recover it. 00:22:08.390 [2024-05-15 11:02:24.365922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.390 [2024-05-15 11:02:24.365955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.390 qpair failed and we were unable to recover it. 00:22:08.390 [2024-05-15 11:02:24.366166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.390 [2024-05-15 11:02:24.366194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.390 qpair failed and we were unable to recover it. 00:22:08.390 [2024-05-15 11:02:24.366429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.390 [2024-05-15 11:02:24.366456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.390 qpair failed and we were unable to recover it. 00:22:08.390 [2024-05-15 11:02:24.366719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.390 [2024-05-15 11:02:24.366745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.390 qpair failed and we were unable to recover it. 00:22:08.390 [2024-05-15 11:02:24.366923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.390 [2024-05-15 11:02:24.366956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.390 qpair failed and we were unable to recover it. 
00:22:08.390 [2024-05-15 11:02:24.367251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.390 [2024-05-15 11:02:24.367277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.390 qpair failed and we were unable to recover it. 00:22:08.390 [2024-05-15 11:02:24.367519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.390 [2024-05-15 11:02:24.367545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.390 qpair failed and we were unable to recover it. 00:22:08.390 [2024-05-15 11:02:24.367849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.390 [2024-05-15 11:02:24.367883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.390 qpair failed and we were unable to recover it. 00:22:08.390 [2024-05-15 11:02:24.368126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.390 [2024-05-15 11:02:24.368155] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.390 qpair failed and we were unable to recover it. 00:22:08.390 [2024-05-15 11:02:24.368448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.390 [2024-05-15 11:02:24.368474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.390 qpair failed and we were unable to recover it. 00:22:08.390 [2024-05-15 11:02:24.368693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.390 [2024-05-15 11:02:24.368719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.390 qpair failed and we were unable to recover it. 00:22:08.390 [2024-05-15 11:02:24.368957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.390 [2024-05-15 11:02:24.368986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.390 qpair failed and we were unable to recover it. 00:22:08.390 [2024-05-15 11:02:24.369185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.390 [2024-05-15 11:02:24.369213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.390 qpair failed and we were unable to recover it. 00:22:08.390 [2024-05-15 11:02:24.369654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.390 [2024-05-15 11:02:24.369703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.390 qpair failed and we were unable to recover it. 00:22:08.390 [2024-05-15 11:02:24.369942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.390 [2024-05-15 11:02:24.369969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.390 qpair failed and we were unable to recover it. 
00:22:08.390 [2024-05-15 11:02:24.370224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.390 [2024-05-15 11:02:24.370252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.390 qpair failed and we were unable to recover it. 00:22:08.390 [2024-05-15 11:02:24.370460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.390 [2024-05-15 11:02:24.370488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.390 qpair failed and we were unable to recover it. 00:22:08.390 [2024-05-15 11:02:24.370809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.370860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.371113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.371139] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.371376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.371407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.371711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.371740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.371956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.371986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.372221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.372246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.372455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.372483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.372709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.372737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 
00:22:08.391 [2024-05-15 11:02:24.372939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.372969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.373233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.373258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.373498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.373526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.373799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.373827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.374060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.374087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.374319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.374343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.374616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.374645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.374917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.374954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.375197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.375222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.375403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.375433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 
00:22:08.391 [2024-05-15 11:02:24.375669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.375698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.375921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.375957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.376166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.376195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.376430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.376455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.376664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.376692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.376951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.376977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.377151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.377175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.377377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.377403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.377638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.377666] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.377937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.377966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 
00:22:08.391 [2024-05-15 11:02:24.378202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.378233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.378436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.378462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.378675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.378704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.378948] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.378974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.379234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.379263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.379469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.379494] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.379766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.379795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.380061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.380087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.380267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.380293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.380504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.380529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 
00:22:08.391 [2024-05-15 11:02:24.380764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.391 [2024-05-15 11:02:24.380793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.391 qpair failed and we were unable to recover it. 00:22:08.391 [2024-05-15 11:02:24.381044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.381076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.381332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.381382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.381619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.381645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.381847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.381875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.382112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.382138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.382419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.382476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.382703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.382731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.382975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.383004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.383198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.383226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 
00:22:08.392 [2024-05-15 11:02:24.383556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.383618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.383860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.383885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.384170] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.384198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.384384] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.384412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.384696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.384747] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.385005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.385032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.385275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.385303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.385504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.385533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.385795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.385825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.386086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.386112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 
00:22:08.392 [2024-05-15 11:02:24.386317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.386343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.386555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.386583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.386892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.386965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.387181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.387208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.387477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.387503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.387714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.387742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.387991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.388020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.388226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.388251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.388453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.388480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.388680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.388710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 
00:22:08.392 [2024-05-15 11:02:24.388941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.388970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.389206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.389235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.389473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.389501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.392 [2024-05-15 11:02:24.389757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.392 [2024-05-15 11:02:24.389786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.392 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.390060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.390090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.390302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.390328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.390559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.390589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.390814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.390843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.391081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.391111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.391311] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.391337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 
00:22:08.393 [2024-05-15 11:02:24.391575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.391603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.391834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.391860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.392070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.392112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.392329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.392354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.392597] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.392629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.392888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.392916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.393167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.393196] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.393444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.393471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.393707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.393736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.393941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.393971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 
00:22:08.393 [2024-05-15 11:02:24.394195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.394223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.394437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.394462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.394691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.394719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.395062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.395105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.395340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.395369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.395599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.395624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.395810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.395835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.396069] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.396098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.396338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.396367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.396600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.396625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 
00:22:08.393 [2024-05-15 11:02:24.396847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.396875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.397133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.397162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.397509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.397564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.397767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.397794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.398009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.398039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.398240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.398269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.398557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.398583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.398762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.398787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.399029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.399058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 00:22:08.393 [2024-05-15 11:02:24.399296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.399325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 
00:22:08.393 [2024-05-15 11:02:24.399549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.393 [2024-05-15 11:02:24.399577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.393 qpair failed and we were unable to recover it. 
00:22:08.393 [... the same three-line failure pattern — posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 11:02:24.399549 through 11:02:24.455220 (console timestamps 00:22:08.393 to 00:22:08.399); duplicate repetitions elided ...]
00:22:08.399 [2024-05-15 11:02:24.455433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.399 [2024-05-15 11:02:24.455459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.399 qpair failed and we were unable to recover it. 00:22:08.399 [2024-05-15 11:02:24.455661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.399 [2024-05-15 11:02:24.455687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.399 qpair failed and we were unable to recover it. 00:22:08.399 [2024-05-15 11:02:24.455898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.399 [2024-05-15 11:02:24.455935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.399 qpair failed and we were unable to recover it. 00:22:08.399 [2024-05-15 11:02:24.456168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.399 [2024-05-15 11:02:24.456193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.399 qpair failed and we were unable to recover it. 00:22:08.399 [2024-05-15 11:02:24.456410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.399 [2024-05-15 11:02:24.456438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.399 qpair failed and we were unable to recover it. 00:22:08.399 [2024-05-15 11:02:24.456647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.399 [2024-05-15 11:02:24.456676] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.399 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.456908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.456940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.457155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.457180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.457386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.457414] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.457648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.457677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 
00:22:08.400 [2024-05-15 11:02:24.457881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.457909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.458176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.458206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.458452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.458477] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.458723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.458751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.458984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.459013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.459255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.459280] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.459494] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.459523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.459757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.459786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.460046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.460076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.460307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.460333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 
00:22:08.400 [2024-05-15 11:02:24.460595] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.460623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.460852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.460879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.461089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.461118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.461373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.461399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.461635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.461664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.461906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.461938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.462154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.462182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.462439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.462464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.462696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.462725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.462983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.463012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 
00:22:08.400 [2024-05-15 11:02:24.463212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.463241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.463506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.463532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.463782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.463807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.464072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.464102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.464372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.464398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.464607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.464635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.464880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.464907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.465207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.465235] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.465458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.465506] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.465760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.465786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 
00:22:08.400 [2024-05-15 11:02:24.466004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.466036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.466274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.466300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.466471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.466497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.466710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.466736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.466971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.467001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.467237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.400 [2024-05-15 11:02:24.467267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.400 qpair failed and we were unable to recover it. 00:22:08.400 [2024-05-15 11:02:24.467467] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.467496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 00:22:08.401 [2024-05-15 11:02:24.467759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.467786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 00:22:08.401 [2024-05-15 11:02:24.468030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.468059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 00:22:08.401 [2024-05-15 11:02:24.468297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.468322] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 
00:22:08.401 [2024-05-15 11:02:24.468533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.468559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 00:22:08.401 [2024-05-15 11:02:24.468768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.468793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 00:22:08.401 [2024-05-15 11:02:24.469067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.469096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 00:22:08.401 [2024-05-15 11:02:24.469695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.469726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 00:22:08.401 [2024-05-15 11:02:24.470009] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.470039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 00:22:08.401 [2024-05-15 11:02:24.470245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.470272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 00:22:08.401 [2024-05-15 11:02:24.470483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.470511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 00:22:08.401 [2024-05-15 11:02:24.470723] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.470765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 00:22:08.401 [2024-05-15 11:02:24.470977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.471008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 00:22:08.401 [2024-05-15 11:02:24.471228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.471253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 
00:22:08.401 [2024-05-15 11:02:24.471522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.471551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 00:22:08.401 [2024-05-15 11:02:24.471761] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.471789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 00:22:08.401 [2024-05-15 11:02:24.472079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.472108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 00:22:08.401 [2024-05-15 11:02:24.472345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.472371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 00:22:08.401 [2024-05-15 11:02:24.472583] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.472609] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 00:22:08.401 [2024-05-15 11:02:24.472798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.472827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 00:22:08.401 [2024-05-15 11:02:24.473062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.473093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 00:22:08.401 [2024-05-15 11:02:24.473346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.473372] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 00:22:08.401 [2024-05-15 11:02:24.473619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.473647] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 00:22:08.401 [2024-05-15 11:02:24.473925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.473966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 
00:22:08.401 [2024-05-15 11:02:24.474201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.474230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 00:22:08.401 [2024-05-15 11:02:24.474465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.474491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 00:22:08.401 [2024-05-15 11:02:24.474731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.474760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.401 qpair failed and we were unable to recover it. 00:22:08.401 [2024-05-15 11:02:24.475000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.401 [2024-05-15 11:02:24.475031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.475299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.475326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.475536] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.475563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.475775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.475799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.476016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.476043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.476255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.476284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.476502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.476529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 
00:22:08.402 [2024-05-15 11:02:24.476724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.476750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.476938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.476964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.477219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.477248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.477474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.477500] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.477709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.477737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.477974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.478013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.478243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.478271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.478509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.478535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.478746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.478774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.479001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.479028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 
00:22:08.402 [2024-05-15 11:02:24.479244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.479272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.479483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.479508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.479720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.479750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.480013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.480040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.480314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.480342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.480541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.480567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.480795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.480838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.481076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.481101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.481411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.481464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.481669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.481695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 
00:22:08.402 [2024-05-15 11:02:24.481879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.481905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.482140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.482170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.482503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.482553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.482776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.482801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.483070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.483100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.483308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.483337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.483747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.483804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.484039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.484065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.484308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.484336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.484560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.484586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 
00:22:08.402 [2024-05-15 11:02:24.484816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.484842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.485066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.485092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.485342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.485371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.485636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.485665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.485939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.485969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.402 [2024-05-15 11:02:24.486200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.402 [2024-05-15 11:02:24.486226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.402 qpair failed and we were unable to recover it. 00:22:08.403 [2024-05-15 11:02:24.486481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.403 [2024-05-15 11:02:24.486509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.403 qpair failed and we were unable to recover it. 00:22:08.403 [2024-05-15 11:02:24.486742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.403 [2024-05-15 11:02:24.486768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.403 qpair failed and we were unable to recover it. 00:22:08.403 [2024-05-15 11:02:24.487035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.403 [2024-05-15 11:02:24.487066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.403 qpair failed and we were unable to recover it. 00:22:08.403 [2024-05-15 11:02:24.487315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.403 [2024-05-15 11:02:24.487341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.403 qpair failed and we were unable to recover it. 
00:22:08.403 [2024-05-15 11:02:24.487588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.403 [2024-05-15 11:02:24.487618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.403 qpair failed and we were unable to recover it. 00:22:08.403 [2024-05-15 11:02:24.487885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.403 [2024-05-15 11:02:24.487913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.403 qpair failed and we were unable to recover it. 00:22:08.403 [2024-05-15 11:02:24.488160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.403 [2024-05-15 11:02:24.488189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.403 qpair failed and we were unable to recover it. 00:22:08.403 [2024-05-15 11:02:24.488490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.403 [2024-05-15 11:02:24.488516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.403 qpair failed and we were unable to recover it. 00:22:08.403 [2024-05-15 11:02:24.488780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.403 [2024-05-15 11:02:24.488807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.403 qpair failed and we were unable to recover it. 00:22:08.403 [2024-05-15 11:02:24.489061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.403 [2024-05-15 11:02:24.489090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.403 qpair failed and we were unable to recover it. 00:22:08.403 [2024-05-15 11:02:24.489298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.403 [2024-05-15 11:02:24.489328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.403 qpair failed and we were unable to recover it. 00:22:08.403 [2024-05-15 11:02:24.489594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.403 [2024-05-15 11:02:24.489620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.403 qpair failed and we were unable to recover it. 00:22:08.403 [2024-05-15 11:02:24.489831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.403 [2024-05-15 11:02:24.489860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.403 qpair failed and we were unable to recover it. 00:22:08.403 [2024-05-15 11:02:24.490065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.403 [2024-05-15 11:02:24.490095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.403 qpair failed and we were unable to recover it. 
00:22:08.403 [2024-05-15 11:02:24.490466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:08.403 [2024-05-15 11:02:24.490517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:08.403 qpair failed and we were unable to recover it.
[... the same three-record failure (posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for roughly 200 further consecutive reconnect attempts, timestamps 2024-05-15 11:02:24.490754 through 11:02:24.545455 ...]
00:22:08.408 [2024-05-15 11:02:24.545639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:08.408 [2024-05-15 11:02:24.545665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:08.408 qpair failed and we were unable to recover it.
00:22:08.408 [2024-05-15 11:02:24.545874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.545903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.546179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.546208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.546473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.546498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.546696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.546721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.546981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.547008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.547208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.547237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.547499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.547524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.547744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.547769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.547977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.548005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.548229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.548258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 
00:22:08.408 [2024-05-15 11:02:24.548559] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.548616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.548849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.548876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.549092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.549119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.549306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.549332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.549730] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.549781] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.550012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.550039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.550260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.550288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.550518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.550544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.550743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.550769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.550964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.550991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 
00:22:08.408 [2024-05-15 11:02:24.551256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.551285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.551514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.551542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.551793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.551818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.552046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.552072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.552341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.552370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.552564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.552592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.552846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.552875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.553107] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.553132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.553368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.553398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.553664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.553692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 
00:22:08.408 [2024-05-15 11:02:24.553893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.553921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.554190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.554215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.554462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.554491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.408 qpair failed and we were unable to recover it. 00:22:08.408 [2024-05-15 11:02:24.554746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.408 [2024-05-15 11:02:24.554773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.554979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.555009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.555203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.555228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.555489] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.555520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.555758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.555791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.556027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.556056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.556269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.556294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 
00:22:08.409 [2024-05-15 11:02:24.556506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.556547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.556776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.556803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.557035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.557063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.557271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.557296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.557535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.557563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.557798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.557827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.558061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.558090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.558341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.558366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.558606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.558631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.558866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.558891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 
00:22:08.409 [2024-05-15 11:02:24.559147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.559175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.559370] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.559396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.559632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.559660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.559890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.559918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.560187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.560215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.560442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.560468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.560706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.560734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.560973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.560999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.561236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.561264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.561524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.561549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 
00:22:08.409 [2024-05-15 11:02:24.561809] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.561837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.562108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.562137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.562368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.562396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.562629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.562654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.562915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.562951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.563198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.563224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.563454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.563482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.563720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.563745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.563960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.563990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.564255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.564283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 
00:22:08.409 [2024-05-15 11:02:24.564487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.564513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.564716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.564741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.565019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.565045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.565229] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.565254] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.565456] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.565481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.565716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.565743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.565977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.566006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.566238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.409 [2024-05-15 11:02:24.566266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.409 qpair failed and we were unable to recover it. 00:22:08.409 [2024-05-15 11:02:24.566491] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.566524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.566789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.566814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 
00:22:08.410 [2024-05-15 11:02:24.567076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.567105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.567337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.567365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.567592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.567621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.567855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.567884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.568065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.568090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.568294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.568324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.568666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.568692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.568902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.568927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.569177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.569206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.569412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.569441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 
00:22:08.410 [2024-05-15 11:02:24.569697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.569765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.569992] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.570018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.570262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.570290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.570490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.570518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.570851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.570895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.571144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.571170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.571387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.571416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.571651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.571679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.571914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.571949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.572163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.572189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 
00:22:08.410 [2024-05-15 11:02:24.572394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.572423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.572652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.572681] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.572894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.572923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.573226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.573252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.573505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.573530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.573712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.573742] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.573981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.574007] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.574192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.574219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.574459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.574488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.574752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.574780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 
00:22:08.410 [2024-05-15 11:02:24.575056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.575086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.575317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.575342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.575556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.575583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.575891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.575920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.576186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.576212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.576395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.576420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.576618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.576648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.576909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.576944] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.577160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.577186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.577429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.577454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 
00:22:08.410 [2024-05-15 11:02:24.577702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.577731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.577995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.578024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.578260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.578288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.578497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.578523] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.578780] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.578809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.579058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.579098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.579328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.579357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.579590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.579617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.579895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.579921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 00:22:08.410 [2024-05-15 11:02:24.580163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.410 [2024-05-15 11:02:24.580191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.410 qpair failed and we were unable to recover it. 
00:22:08.410 [2024-05-15 11:02:24.580461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.580486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.580724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.580750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.580995] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.581029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.581240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.581265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.581471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.581499] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.581728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.581754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.582004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.582034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.582269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.582299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.582615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.582654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.582923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.582974] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 
00:22:08.411 [2024-05-15 11:02:24.583188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.583214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.583414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.583442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.583669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.583697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.583956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.583982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.584249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.584278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.584519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.584545] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.584777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.584803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.585014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.585040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.585332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.585361] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.585580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.585608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 
00:22:08.411 [2024-05-15 11:02:24.585927] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.585990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.586224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.586250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.586461] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.586489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.586698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.586727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.587003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.587029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.587216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.587242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.587539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.587567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.587767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.587797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.588032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.588062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.588267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.588292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 
00:22:08.411 [2024-05-15 11:02:24.588513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.588541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.588755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.588783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.589068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.589099] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.589416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.589460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.589668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.589697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.589957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.589986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.590225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.590253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.590499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.590525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.590745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.590772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.590994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.591023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 
00:22:08.411 [2024-05-15 11:02:24.591248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.591309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.591543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.591571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.591853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.591882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.592138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.592164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.592367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.592395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.592655] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.592682] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.592916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.592952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.593182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.593210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.593419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.593461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.593732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.593759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 
00:22:08.411 [2024-05-15 11:02:24.594002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.594032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.594227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.594257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.411 [2024-05-15 11:02:24.594507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.411 [2024-05-15 11:02:24.594533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.411 qpair failed and we were unable to recover it. 00:22:08.412 [2024-05-15 11:02:24.594728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.412 [2024-05-15 11:02:24.594755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.412 qpair failed and we were unable to recover it. 00:22:08.412 [2024-05-15 11:02:24.595023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.412 [2024-05-15 11:02:24.595052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.412 qpair failed and we were unable to recover it. 00:22:08.412 [2024-05-15 11:02:24.595296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.412 [2024-05-15 11:02:24.595325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.412 qpair failed and we were unable to recover it. 00:22:08.412 [2024-05-15 11:02:24.595613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.412 [2024-05-15 11:02:24.595662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.412 qpair failed and we were unable to recover it. 00:22:08.412 [2024-05-15 11:02:24.595916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.412 [2024-05-15 11:02:24.595948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.412 qpair failed and we were unable to recover it. 00:22:08.412 [2024-05-15 11:02:24.596198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.412 [2024-05-15 11:02:24.596225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.412 qpair failed and we were unable to recover it. 00:22:08.412 [2024-05-15 11:02:24.596468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.412 [2024-05-15 11:02:24.596497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.412 qpair failed and we were unable to recover it. 
00:22:08.412 [2024-05-15 11:02:24.596764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.412 [2024-05-15 11:02:24.596792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.412 qpair failed and we were unable to recover it. 00:22:08.412 [2024-05-15 11:02:24.597031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.412 [2024-05-15 11:02:24.597057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.412 qpair failed and we were unable to recover it. 00:22:08.412 [2024-05-15 11:02:24.597262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.412 [2024-05-15 11:02:24.597291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.412 qpair failed and we were unable to recover it. 00:22:08.412 [2024-05-15 11:02:24.597492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.412 [2024-05-15 11:02:24.597520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.412 qpair failed and we were unable to recover it. 00:22:08.412 [2024-05-15 11:02:24.597716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.412 [2024-05-15 11:02:24.597745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.412 qpair failed and we were unable to recover it. 00:22:08.412 [2024-05-15 11:02:24.597949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.412 [2024-05-15 11:02:24.597977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.412 qpair failed and we were unable to recover it. 00:22:08.412 [2024-05-15 11:02:24.598161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.687 [2024-05-15 11:02:24.598187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.687 qpair failed and we were unable to recover it. 00:22:08.687 [2024-05-15 11:02:24.598371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.687 [2024-05-15 11:02:24.598396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.687 qpair failed and we were unable to recover it. 00:22:08.687 [2024-05-15 11:02:24.598606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.687 [2024-05-15 11:02:24.598636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.687 qpair failed and we were unable to recover it. 00:22:08.687 [2024-05-15 11:02:24.598900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.687 [2024-05-15 11:02:24.598927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.687 qpair failed and we were unable to recover it. 
00:22:08.687 [2024-05-15 11:02:24.599140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.687 [2024-05-15 11:02:24.599172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.687 qpair failed and we were unable to recover it. 00:22:08.687 [2024-05-15 11:02:24.599388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.687 [2024-05-15 11:02:24.599415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.687 qpair failed and we were unable to recover it. 00:22:08.687 [2024-05-15 11:02:24.599664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.687 [2024-05-15 11:02:24.599693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.687 qpair failed and we were unable to recover it. 00:22:08.687 [2024-05-15 11:02:24.599937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.687 [2024-05-15 11:02:24.599964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.687 qpair failed and we were unable to recover it. 00:22:08.687 [2024-05-15 11:02:24.600178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.687 [2024-05-15 11:02:24.600206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.687 qpair failed and we were unable to recover it. 00:22:08.687 [2024-05-15 11:02:24.600421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.687 [2024-05-15 11:02:24.600451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.687 qpair failed and we were unable to recover it. 00:22:08.687 [2024-05-15 11:02:24.600666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.687 [2024-05-15 11:02:24.600695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.687 qpair failed and we were unable to recover it. 00:22:08.687 [2024-05-15 11:02:24.600945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.687 [2024-05-15 11:02:24.600972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.687 qpair failed and we were unable to recover it. 00:22:08.687 [2024-05-15 11:02:24.601237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.601266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.601469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.601497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 
00:22:08.688 [2024-05-15 11:02:24.601777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.601829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.602121] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.602147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.602392] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.602421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.602654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.602683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.602942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.602987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.603193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.603219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.603453] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.603481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.603718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.603744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.603985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.604015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.604220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.604248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 
00:22:08.688 [2024-05-15 11:02:24.604512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.604538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.604777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.604806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.605008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.605038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.605296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.605321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.605582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.605611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.605872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.605897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.606195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.606224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.606432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.606476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.606722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.606751] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.607018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.607047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 
00:22:08.688 [2024-05-15 11:02:24.607279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.607307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.607563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.607589] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.607848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.607873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.608102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.608145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.608372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.608402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.608607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.608634] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.608822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.608848] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.609108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.609137] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.609409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.609438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.609670] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.609696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 
00:22:08.688 [2024-05-15 11:02:24.609967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.609997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.610237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.688 [2024-05-15 11:02:24.610263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.688 qpair failed and we were unable to recover it. 00:22:08.688 [2024-05-15 11:02:24.610474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.610502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.610708] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.610735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.610971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.611001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.611264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.611290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.611505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.611531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.611705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.611730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.611942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.611968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.612204] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.612232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 
00:22:08.689 [2024-05-15 11:02:24.612442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.612473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.612714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.612739] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.612956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.612983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.613224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.613253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.613514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.613540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.613760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.613786] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.614031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.614061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.614289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.614316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.614609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.614663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.614915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.614948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 
00:22:08.689 [2024-05-15 11:02:24.615185] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.615210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.615422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.615451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.615660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.615688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.615946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.615972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.616225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.616255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.616514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.616543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.616771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.616797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.617010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.617036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.617308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.617355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.617573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.617604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 
00:22:08.689 [2024-05-15 11:02:24.617882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.617909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.618130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.618156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.618400] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.618431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.618675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.618702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.618988] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.619016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.619254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.689 [2024-05-15 11:02:24.619281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:08.689 qpair failed and we were unable to recover it. 00:22:08.689 [2024-05-15 11:02:24.619573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.619599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.619814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.619843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.620090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.620118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.620354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.620380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 
00:22:08.690 [2024-05-15 11:02:24.620617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.620646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.620872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.620907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.620960] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0a0b0 (9): Bad file descriptor 00:22:08.690 [2024-05-15 11:02:24.621228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.621267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.621513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.621544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.621782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.621811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.622047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.622074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.622308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.622334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.622572] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.622601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.622808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.622833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 
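One entry in the block above stands out from the connect() retries: nvme_tcp_qpair_process_completions reports error (9) on tqpair=0x1e0a0b0, and errno 9 is EBADF on Linux, which suggests the flush ran after the socket descriptor backing that qpair had already been closed during teardown. A minimal sketch of how the same errno arises, using a plain POSIX descriptor rather than SPDK's flush path:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) < 0) {
        perror("pipe");
        return 1;
    }

    close(fds[1]);                    /* descriptor torn down first... */

    if (write(fds[1], "x", 1) < 0)    /* ...then used: fails with EBADF */
        printf("write failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fds[0]);
    return 0;
}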
00:22:08.690 [2024-05-15 11:02:24.623040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.623067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.623296] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.623324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.623659] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.623713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.623941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.623967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.624149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.624174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.624438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.624472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.624776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.624804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.625055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.625082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.625344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.625373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.625703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.625754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 
00:22:08.690 [2024-05-15 11:02:24.625958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.626000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.626214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.626240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.626539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.626590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.626829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.626858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.627102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.627129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.627344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.627387] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.627639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.627668] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.627910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.627955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.628218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.628247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 00:22:08.690 [2024-05-15 11:02:24.628490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.690 [2024-05-15 11:02:24.628519] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.690 qpair failed and we were unable to recover it. 
00:22:08.691 [2024-05-15 11:02:24.628843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.628871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.629079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.629107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.629371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.629400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.629694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.629744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.630013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.630039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.630231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.630256] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.630472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.630514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.630947] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.631000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.631211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.631237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.631483] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.631511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 
00:22:08.691 [2024-05-15 11:02:24.631848] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.631898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.632135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.632161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.632439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.632465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.632966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.633001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.633228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.633260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.633711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.633759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.633987] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.634014] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.634248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.634276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.634689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.634745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.634986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.635012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 
00:22:08.691 [2024-05-15 11:02:24.635237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.635265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.635503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.635532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.635763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.635788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.636022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.636051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.636309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.636337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.636571] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.636610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.636865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.636895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.637159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.637185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.637394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.637420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.637663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.637691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 
00:22:08.691 [2024-05-15 11:02:24.637938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.637967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.638203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.638255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.638496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.638525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.638728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.691 [2024-05-15 11:02:24.638756] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.691 qpair failed and we were unable to recover it. 00:22:08.691 [2024-05-15 11:02:24.639014] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.692 [2024-05-15 11:02:24.639041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.692 qpair failed and we were unable to recover it. 00:22:08.692 [2024-05-15 11:02:24.639287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.692 [2024-05-15 11:02:24.639314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.692 qpair failed and we were unable to recover it. 00:22:08.692 [2024-05-15 11:02:24.639530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.692 [2024-05-15 11:02:24.639559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.692 qpair failed and we were unable to recover it. 00:22:08.692 [2024-05-15 11:02:24.639760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.692 [2024-05-15 11:02:24.639785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.692 qpair failed and we were unable to recover it. 00:22:08.692 [2024-05-15 11:02:24.640046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.692 [2024-05-15 11:02:24.640076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.692 qpair failed and we were unable to recover it. 00:22:08.692 [2024-05-15 11:02:24.640321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.692 [2024-05-15 11:02:24.640350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.692 qpair failed and we were unable to recover it. 
00:22:08.692 [2024-05-15 11:02:24.640586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.692 [2024-05-15 11:02:24.640612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.692 qpair failed and we were unable to recover it. 00:22:08.692 [2024-05-15 11:02:24.640876] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.692 [2024-05-15 11:02:24.640905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.692 qpair failed and we were unable to recover it. 00:22:08.692 [2024-05-15 11:02:24.641145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.692 [2024-05-15 11:02:24.641171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.692 qpair failed and we were unable to recover it. 00:22:08.692 [2024-05-15 11:02:24.641426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.692 [2024-05-15 11:02:24.641452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.692 qpair failed and we were unable to recover it. 00:22:08.692 [2024-05-15 11:02:24.641687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.692 [2024-05-15 11:02:24.641716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.692 qpair failed and we were unable to recover it. 00:22:08.692 [2024-05-15 11:02:24.641922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.692 [2024-05-15 11:02:24.641958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.692 qpair failed and we were unable to recover it. 00:22:08.692 [2024-05-15 11:02:24.642165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.692 [2024-05-15 11:02:24.642200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.692 qpair failed and we were unable to recover it. 00:22:08.692 [2024-05-15 11:02:24.642388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.692 [2024-05-15 11:02:24.642416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.692 qpair failed and we were unable to recover it. 00:22:08.692 [2024-05-15 11:02:24.642627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.692 [2024-05-15 11:02:24.642657] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.692 qpair failed and we were unable to recover it. 00:22:08.692 [2024-05-15 11:02:24.642901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.692 [2024-05-15 11:02:24.642927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.692 qpair failed and we were unable to recover it. 
00:22:08.692 [2024-05-15 11:02:24.643195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.692 [2024-05-15 11:02:24.643220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.692 qpair failed and we were unable to recover it. 00:22:08.692 [2024-05-15 11:02:24.643420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.692 [2024-05-15 11:02:24.643446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.692 qpair failed and we were unable to recover it. 00:22:08.692 [2024-05-15 11:02:24.643658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.692 [2024-05-15 11:02:24.643684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.692 qpair failed and we were unable to recover it. 00:22:08.692 [2024-05-15 11:02:24.643917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.692 [2024-05-15 11:02:24.643976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.692 qpair failed and we were unable to recover it. 00:22:08.692 [2024-05-15 11:02:24.644208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.692 [2024-05-15 11:02:24.644236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.692 qpair failed and we were unable to recover it. 00:22:08.692 [2024-05-15 11:02:24.644446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.692 [2024-05-15 11:02:24.644472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.692 qpair failed and we were unable to recover it. 00:22:08.692 [2024-05-15 11:02:24.644707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.692 [2024-05-15 11:02:24.644737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.692 qpair failed and we were unable to recover it. 00:22:08.692 [2024-05-15 11:02:24.644996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.692 [2024-05-15 11:02:24.645026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.645224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.645250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.645481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.645509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 
00:22:08.693 [2024-05-15 11:02:24.645772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.645800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.646013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.646040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.646251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.646279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.646498] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.646526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.646753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.646778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.647042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.647072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.647307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.647336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.647580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.647607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.647815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.647843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.648053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.648083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 
00:22:08.693 [2024-05-15 11:02:24.648305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.648332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.648574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.648606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.648811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.648839] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.649049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.649075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.649314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.649342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.649603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.649631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.649884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.649913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.650161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.650188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.650417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.650445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.650664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.650690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 
00:22:08.693 [2024-05-15 11:02:24.650873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.650903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.651141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.651168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.651382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.651408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.651616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.651641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.651852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.651877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.652136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.652162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.652431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.652459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.693 [2024-05-15 11:02:24.652689] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.693 [2024-05-15 11:02:24.652717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.693 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.652921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.652955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.653196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.653225] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 
00:22:08.694 [2024-05-15 11:02:24.653424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.653453] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.653692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.653717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.653937] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.653990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.654201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.654231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.654492] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.654518] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.654744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.654772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.655015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.655044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.655274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.655300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.655538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.655566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.655773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.655798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 
00:22:08.694 [2024-05-15 11:02:24.656012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.656038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.656273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.656304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.656509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.656537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.656768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.656794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.657005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.657031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.657239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.657264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.657519] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.657544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.657734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.657759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.658013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.658039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.658250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.658276] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 
00:22:08.694 [2024-05-15 11:02:24.658509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.658537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.658789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.658817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.659043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.659069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.659334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.659363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.659623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.659651] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.659868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.659893] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.660113] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.660140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.660354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.660380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.660587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.660612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.660818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.660843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 
00:22:08.694 [2024-05-15 11:02:24.661050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.661080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.661350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.661376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.661565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.661591] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.694 qpair failed and we were unable to recover it. 00:22:08.694 [2024-05-15 11:02:24.661801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.694 [2024-05-15 11:02:24.661826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.662052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.662078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.662287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.662316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.662518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.662546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.662754] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.662779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.663002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.663032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.663264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.663293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 
00:22:08.695 [2024-05-15 11:02:24.663541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.663567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.663790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.663819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.664073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.664102] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.664339] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.664365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.664575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.664603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.664808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.664837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.665097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.665124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.665375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.665403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.665632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.665660] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.665889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.665915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 
00:22:08.695 [2024-05-15 11:02:24.666169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.666207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.666472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.666498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.666674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.666699] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.666954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.666987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.667220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.667248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.667513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.667539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.667752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.667778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.668008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.668038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.668239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.668268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.668482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.668511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 
00:22:08.695 [2024-05-15 11:02:24.668750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.668776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.668986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.669013] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.669227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.669253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.669434] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.669459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.669774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.669825] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.670096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.670123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.670371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.670400] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.670656] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.670684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.695 qpair failed and we were unable to recover it. 00:22:08.695 [2024-05-15 11:02:24.670952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.695 [2024-05-15 11:02:24.670995] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.696 qpair failed and we were unable to recover it. 00:22:08.696 [2024-05-15 11:02:24.671224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.696 [2024-05-15 11:02:24.671253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.696 qpair failed and we were unable to recover it. 
00:22:08.696 [2024-05-15 11:02:24.671635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.696 [2024-05-15 11:02:24.671665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.696 qpair failed and we were unable to recover it. 00:22:08.696 [2024-05-15 11:02:24.671942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.696 [2024-05-15 11:02:24.671991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.696 qpair failed and we were unable to recover it. 00:22:08.696 [2024-05-15 11:02:24.672227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.696 [2024-05-15 11:02:24.672255] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.696 qpair failed and we were unable to recover it. 00:22:08.696 [2024-05-15 11:02:24.672541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.696 [2024-05-15 11:02:24.672566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.696 qpair failed and we were unable to recover it. 00:22:08.696 [2024-05-15 11:02:24.672921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.696 [2024-05-15 11:02:24.672990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.696 qpair failed and we were unable to recover it. 00:22:08.696 [2024-05-15 11:02:24.673216] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.696 [2024-05-15 11:02:24.673260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.696 qpair failed and we were unable to recover it. 00:22:08.696 [2024-05-15 11:02:24.673673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.696 [2024-05-15 11:02:24.673726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.696 qpair failed and we were unable to recover it. 00:22:08.696 [2024-05-15 11:02:24.674052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.696 [2024-05-15 11:02:24.674079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.696 qpair failed and we were unable to recover it. 00:22:08.696 [2024-05-15 11:02:24.674312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.696 [2024-05-15 11:02:24.674342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.696 qpair failed and we were unable to recover it. 00:22:08.696 [2024-05-15 11:02:24.674549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.696 [2024-05-15 11:02:24.674575] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:08.696 qpair failed and we were unable to recover it. 
00:22:08.699 [2024-05-15 11:02:24.706261] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:08.699 [2024-05-15 11:02:24.706308] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:08.699 qpair failed and we were unable to recover it.
00:22:08.699 [the same two-line error pair repeats for every retry against tqpair=0x7f3538000b90 (addr=10.0.0.2, port=4420) from 11:02:24.706 through 11:02:24.731; each attempt fails with errno = 111 and the qpair is not recovered]
00:22:08.702 [2024-05-15 11:02:24.732127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.702 [2024-05-15 11:02:24.732171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.702 qpair failed and we were unable to recover it. 00:22:08.702 [2024-05-15 11:02:24.732431] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.702 [2024-05-15 11:02:24.732474] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.702 qpair failed and we were unable to recover it. 00:22:08.702 [2024-05-15 11:02:24.732715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.702 [2024-05-15 11:02:24.732759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.702 qpair failed and we were unable to recover it. 00:22:08.702 [2024-05-15 11:02:24.733000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.702 [2024-05-15 11:02:24.733045] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.702 qpair failed and we were unable to recover it. 00:22:08.702 [2024-05-15 11:02:24.733276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.702 [2024-05-15 11:02:24.733320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.702 qpair failed and we were unable to recover it. 00:22:08.702 [2024-05-15 11:02:24.733555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.702 [2024-05-15 11:02:24.733600] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.702 qpair failed and we were unable to recover it. 00:22:08.702 [2024-05-15 11:02:24.733825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.702 [2024-05-15 11:02:24.733851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.702 qpair failed and we were unable to recover it. 00:22:08.702 [2024-05-15 11:02:24.734128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.702 [2024-05-15 11:02:24.734177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.702 qpair failed and we were unable to recover it. 00:22:08.702 [2024-05-15 11:02:24.734417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.702 [2024-05-15 11:02:24.734463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.702 qpair failed and we were unable to recover it. 00:22:08.702 [2024-05-15 11:02:24.734706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.702 [2024-05-15 11:02:24.734750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.702 qpair failed and we were unable to recover it. 
00:22:08.702 [2024-05-15 11:02:24.734959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.702 [2024-05-15 11:02:24.734996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.702 qpair failed and we were unable to recover it. 00:22:08.702 [2024-05-15 11:02:24.735235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.702 [2024-05-15 11:02:24.735278] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.702 qpair failed and we were unable to recover it. 00:22:08.702 [2024-05-15 11:02:24.735485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.702 [2024-05-15 11:02:24.735528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.702 qpair failed and we were unable to recover it. 00:22:08.702 [2024-05-15 11:02:24.735772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.702 [2024-05-15 11:02:24.735816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.702 qpair failed and we were unable to recover it. 00:22:08.702 [2024-05-15 11:02:24.736054] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.702 [2024-05-15 11:02:24.736081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.702 qpair failed and we were unable to recover it. 00:22:08.702 [2024-05-15 11:02:24.736291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.702 [2024-05-15 11:02:24.736335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.702 qpair failed and we were unable to recover it. 00:22:08.702 [2024-05-15 11:02:24.736575] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.702 [2024-05-15 11:02:24.736620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.702 qpair failed and we were unable to recover it. 00:22:08.702 [2024-05-15 11:02:24.736829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.702 [2024-05-15 11:02:24.736855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.702 qpair failed and we were unable to recover it. 00:22:08.702 [2024-05-15 11:02:24.737068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.702 [2024-05-15 11:02:24.737112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.703 qpair failed and we were unable to recover it. 00:22:08.703 [2024-05-15 11:02:24.737385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.703 [2024-05-15 11:02:24.737415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.703 qpair failed and we were unable to recover it. 
00:22:08.703 [2024-05-15 11:02:24.737683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.703 [2024-05-15 11:02:24.737727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.703 qpair failed and we were unable to recover it. 00:22:08.703 [2024-05-15 11:02:24.737991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.703 [2024-05-15 11:02:24.738018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.703 qpair failed and we were unable to recover it. 00:22:08.703 [2024-05-15 11:02:24.738287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.703 [2024-05-15 11:02:24.738332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.703 qpair failed and we were unable to recover it. 00:22:08.703 [2024-05-15 11:02:24.738580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.703 [2024-05-15 11:02:24.738623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.703 qpair failed and we were unable to recover it. 00:22:08.703 [2024-05-15 11:02:24.738907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.703 [2024-05-15 11:02:24.738954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.703 qpair failed and we were unable to recover it. 00:22:08.703 [2024-05-15 11:02:24.739167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.703 [2024-05-15 11:02:24.739193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.703 qpair failed and we were unable to recover it. 00:22:08.703 [2024-05-15 11:02:24.739445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.703 [2024-05-15 11:02:24.739489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.703 qpair failed and we were unable to recover it. 00:22:08.703 [2024-05-15 11:02:24.739829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.703 [2024-05-15 11:02:24.739873] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.703 qpair failed and we were unable to recover it. 00:22:08.703 [2024-05-15 11:02:24.740091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.703 [2024-05-15 11:02:24.740118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.703 qpair failed and we were unable to recover it. 00:22:08.703 [2024-05-15 11:02:24.740361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.703 [2024-05-15 11:02:24.740411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.703 qpair failed and we were unable to recover it. 
00:22:08.703 [2024-05-15 11:02:24.740645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.703 [2024-05-15 11:02:24.740688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.703 qpair failed and we were unable to recover it. 00:22:08.703 [2024-05-15 11:02:24.740909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.703 [2024-05-15 11:02:24.740942] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.703 qpair failed and we were unable to recover it. 00:22:08.703 [2024-05-15 11:02:24.741161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.703 [2024-05-15 11:02:24.741189] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.703 qpair failed and we were unable to recover it. 00:22:08.703 [2024-05-15 11:02:24.741448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.703 [2024-05-15 11:02:24.741492] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.703 qpair failed and we were unable to recover it. 00:22:08.703 [2024-05-15 11:02:24.741702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.703 [2024-05-15 11:02:24.741746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.703 qpair failed and we were unable to recover it. 00:22:08.703 [2024-05-15 11:02:24.742000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.703 [2024-05-15 11:02:24.742030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.703 qpair failed and we were unable to recover it. 00:22:08.703 [2024-05-15 11:02:24.742258] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.703 [2024-05-15 11:02:24.742300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.703 qpair failed and we were unable to recover it. 00:22:08.703 [2024-05-15 11:02:24.742569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.703 [2024-05-15 11:02:24.742612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.703 qpair failed and we were unable to recover it. 00:22:08.703 [2024-05-15 11:02:24.742849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.703 [2024-05-15 11:02:24.742876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.703 qpair failed and we were unable to recover it. 00:22:08.703 [2024-05-15 11:02:24.743059] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.703 [2024-05-15 11:02:24.743087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.703 qpair failed and we were unable to recover it. 
00:22:08.703 [2024-05-15 11:02:24.743361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.703 [2024-05-15 11:02:24.743391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.703 qpair failed and we were unable to recover it. 00:22:08.703 [2024-05-15 11:02:24.743662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.703 [2024-05-15 11:02:24.743705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.703 qpair failed and we were unable to recover it. 00:22:08.703 [2024-05-15 11:02:24.743941] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.703 [2024-05-15 11:02:24.743967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.703 qpair failed and we were unable to recover it. 00:22:08.703 [2024-05-15 11:02:24.744242] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.703 [2024-05-15 11:02:24.744271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.703 qpair failed and we were unable to recover it. 00:22:08.703 [2024-05-15 11:02:24.744558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.744601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.744841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.744885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.745117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.745144] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.745385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.745433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.745644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.745687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.745904] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.745948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 
00:22:08.704 [2024-05-15 11:02:24.746134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.746162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.746437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.746481] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.746702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.746746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.746968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.746996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.747526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.747557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.747801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.747828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.748060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.748104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.748377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.748406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.748641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.748684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.748894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.748920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 
00:22:08.704 [2024-05-15 11:02:24.749149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.749193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.749439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.749483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.749704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.749748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.749936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.749963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.750148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.750175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.750421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.750464] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.750674] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.750717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.750926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.750961] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.751173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.751200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.751435] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.751478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 
00:22:08.704 [2024-05-15 11:02:24.751693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.751735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.751959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.751987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.752165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.752192] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.752446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.752493] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.752712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.752757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.752978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.753005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.753190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.753217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.753468] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.753510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.753746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.753775] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.754051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.754079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 
00:22:08.704 [2024-05-15 11:02:24.754290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.754334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.754568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.754612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.754838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.704 [2024-05-15 11:02:24.754864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.704 qpair failed and we were unable to recover it. 00:22:08.704 [2024-05-15 11:02:24.755051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.755078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.755301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.755345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.755612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.755655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.755835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.755862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.756074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.756104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.756353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.756396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.756627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.756670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 
00:22:08.705 [2024-05-15 11:02:24.756920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.756951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.757199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.757243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.757502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.757529] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.757797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.757840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.758045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.758072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.758314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.758358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.758627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.758656] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.758909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.758941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.759160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.759186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.759405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.759433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 
00:22:08.705 [2024-05-15 11:02:24.759683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.759726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.759961] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.759989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.760168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.760193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.760471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.760514] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.760757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.760800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.761052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.761096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.761329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.761373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.761610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.761654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.761953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.761981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.762189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.762233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 
00:22:08.705 [2024-05-15 11:02:24.762509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.762552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.762769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.762811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.763022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.763050] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.763269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.763312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.763578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.763620] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.763836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.763862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.764050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.764077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.764294] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.764338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.764581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.764624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.764833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.764858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 
00:22:08.705 [2024-05-15 11:02:24.765058] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.765085] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.705 qpair failed and we were unable to recover it. 00:22:08.705 [2024-05-15 11:02:24.765302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.705 [2024-05-15 11:02:24.765346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.706 qpair failed and we were unable to recover it. 00:22:08.706 [2024-05-15 11:02:24.765556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.706 [2024-05-15 11:02:24.765583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.706 qpair failed and we were unable to recover it. 00:22:08.706 [2024-05-15 11:02:24.765831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.706 [2024-05-15 11:02:24.765857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.706 qpair failed and we were unable to recover it. 00:22:08.706 [2024-05-15 11:02:24.766096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.706 [2024-05-15 11:02:24.766141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.706 qpair failed and we were unable to recover it. 00:22:08.706 [2024-05-15 11:02:24.766383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.706 [2024-05-15 11:02:24.766427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.706 qpair failed and we were unable to recover it. 00:22:08.706 [2024-05-15 11:02:24.766663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.706 [2024-05-15 11:02:24.766706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.706 qpair failed and we were unable to recover it. 00:22:08.706 [2024-05-15 11:02:24.766949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.706 [2024-05-15 11:02:24.766980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.706 qpair failed and we were unable to recover it. 00:22:08.706 [2024-05-15 11:02:24.767196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.706 [2024-05-15 11:02:24.767222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.706 qpair failed and we were unable to recover it. 00:22:08.706 [2024-05-15 11:02:24.767455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.706 [2024-05-15 11:02:24.767497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.706 qpair failed and we were unable to recover it. 
00:22:08.706 [2024-05-15 11:02:24.767742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:08.706 [2024-05-15 11:02:24.767785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:08.706 qpair failed and we were unable to recover it.
00:22:08.706 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 11:02:24.767991 through 11:02:24.806230 ...]
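On Linux, errno = 111 is ECONNREFUSED: nothing is accepting TCP connections on 10.0.0.2:4420 while the target side of the test is down, so every reconnect attempt the initiator makes fails the same way. A minimal standalone sketch of the failing call (hypothetical repro program, not part of the SPDK tree; it assumes a Linux host where the address is reachable but has no listener on port 4420):

/* repro.c - reproduce the errno = 111 (ECONNREFUSED) path seen above:
 * connect() to a reachable address with no listener on the port.
 * The equivalent call in the log lives in SPDK's posix_sock_create(). */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the target, this should print errno = 111 */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Compiled and run against a port with no listener, this should print "connect() failed, errno = 111 (Connection refused)", matching the posix_sock_create() entries above.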
00:22:08.710 [2024-05-15 11:02:24.806501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.806544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 00:22:08.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 2891593 Killed "${NVMF_APP[@]}" "$@" 00:22:08.710 [2024-05-15 11:02:24.806782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.806827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 00:22:08.710 [2024-05-15 11:02:24.807027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.807054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 00:22:08.710 [2024-05-15 11:02:24.807269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.807312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 00:22:08.710 11:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:22:08.710 [2024-05-15 11:02:24.807560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.807605] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 00:22:08.710 11:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:22:08.710 [2024-05-15 11:02:24.807822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.807849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 00:22:08.710 11:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:08.710 [2024-05-15 11:02:24.808067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.808095] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 00:22:08.710 11:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:08.710 [2024-05-15 11:02:24.808299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.808344] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 
00:22:08.710 [2024-05-15 11:02:24.808551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 11:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:08.710 [2024-05-15 11:02:24.808595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 00:22:08.710 [2024-05-15 11:02:24.808793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.808819] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 00:22:08.710 [2024-05-15 11:02:24.809066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.809111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 00:22:08.710 [2024-05-15 11:02:24.809390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.809420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 00:22:08.710 [2024-05-15 11:02:24.809667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.809710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 00:22:08.710 [2024-05-15 11:02:24.809936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.809963] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 00:22:08.710 [2024-05-15 11:02:24.810145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.810171] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 00:22:08.710 [2024-05-15 11:02:24.810419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.810448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 00:22:08.710 [2024-05-15 11:02:24.810705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.810748] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 
00:22:08.710 [2024-05-15 11:02:24.810964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.810991] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 00:22:08.710 [2024-05-15 11:02:24.811225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.811267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 00:22:08.710 [2024-05-15 11:02:24.811482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.811526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 00:22:08.710 [2024-05-15 11:02:24.811792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.811836] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 00:22:08.710 [2024-05-15 11:02:24.812122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.812167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 00:22:08.710 [2024-05-15 11:02:24.812533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.812587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 11:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2892148 00:22:08.710 qpair failed and we were unable to recover it. 00:22:08.710 11:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:22:08.710 11:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2892148 00:22:08.710 [2024-05-15 11:02:24.812819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.812846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 00:22:08.710 11:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@827 -- # '[' -z 2892148 ']' 00:22:08.710 [2024-05-15 11:02:24.813074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.813119] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 
00:22:08.710 11:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:08.710 11:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:08.710 [2024-05-15 11:02:24.813319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.813367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 00:22:08.710 11:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:08.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:08.710 11:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:08.710 [2024-05-15 11:02:24.813610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.813655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 00:22:08.710 11:02:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:08.710 [2024-05-15 11:02:24.813900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.710 [2024-05-15 11:02:24.813927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.710 qpair failed and we were unable to recover it. 00:22:08.711 [2024-05-15 11:02:24.814203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.711 [2024-05-15 11:02:24.814251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.711 qpair failed and we were unable to recover it. 00:22:08.711 [2024-05-15 11:02:24.814528] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.711 [2024-05-15 11:02:24.814572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.711 qpair failed and we were unable to recover it. 00:22:08.711 [2024-05-15 11:02:24.814751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.711 [2024-05-15 11:02:24.814779] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.711 qpair failed and we were unable to recover it. 00:22:08.711 [2024-05-15 11:02:24.814986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:08.711 [2024-05-15 11:02:24.815011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:08.711 qpair failed and we were unable to recover it. 
00:22:08.711 [... the connect() failed, errno = 111 / sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. triplet repeats unchanged from 11:02:24.815243 through 11:02:24.835885 ...]
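Every collapsed entry above is the same failure: connect() returning errno 111, which on Linux is ECONNREFUSED, meaning nothing is accepting on 10.0.0.2:4420 at that instant, which is exactly what nvmf_target_disconnect_tc2 provokes by taking the target down. A minimal standalone sketch, not SPDK code, showing how that errno surfaces (the address and port are copied from the log and are otherwise hypothetical):

```c
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Mirrors addr=10.0.0.2, port=4420 from the log lines above. */
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With no listener on the port, connect() fails and errno is
         * ECONNREFUSED, which is 111 on Linux: the value posix.c reports. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```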
00:22:08.713 [... the triplet for tqpair=0x7f3538000b90 continues from 11:02:24.836080 through 11:02:24.837150 ...]
00:22:08.713 [2024-05-15 11:02:24.837385] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:08.713 [2024-05-15 11:02:24.837429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420
00:22:08.713 qpair failed and we were unable to recover it.
00:22:08.713 [2024-05-15 11:02:24.837691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:08.713 [2024-05-15 11:02:24.837734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:08.713 qpair failed and we were unable to recover it.
00:22:08.713 [... the same triplet for tqpair=0x1e0d420 repeats at 11:02:24.838014, .838285 and .838569 ...]
00:22:08.713 [... the triplet for tqpair=0x1e0d420 repeats unchanged from 11:02:24.838831 through 11:02:24.844020 ...]
00:22:08.714 [2024-05-15 11:02:24.844251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:08.714 [2024-05-15 11:02:24.844290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:08.714 qpair failed and we were unable to recover it.
00:22:08.714 [... the triplet for tqpair=0x7f3538000b90 repeats unchanged from 11:02:24.844536 through 11:02:24.857874 ...]
00:22:08.715 [... the triplet for tqpair=0x7f3538000b90 continues from 11:02:24.858089 through 11:02:24.859734 ...]
00:22:08.715 [2024-05-15 11:02:24.859969] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization...
00:22:08.715 [2024-05-15 11:02:24.860047] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:08.715 [2024-05-15 11:02:24.859970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:08.715 [2024-05-15 11:02:24.859999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:08.715 qpair failed and we were unable to recover it.
00:22:08.715 [2024-05-15 11:02:24.860267] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:08.715 [2024-05-15 11:02:24.860325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:08.715 qpair failed and we were unable to recover it.
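The two initialization lines above are the nvmf target coming back up: SPDK hands the bracketed argument list to DPDK's Environment Abstraction Layer (-c 0xF0 is a hex coremask selecting cores 4-7; --file-prefix namespaces the hugepage and runtime files; --proc-type=auto picks primary/secondary automatically). As a hedged sketch of how such a parameter vector reaches DPDK (illustrative only; SPDK builds this argv internally in its env layer, and running it requires a configured DPDK environment with hugepages):

```c
#include <stdio.h>
#include <rte_eal.h>

int main(void)
{
    /* Argument vector mirroring the "[ DPDK EAL parameters: ... ]" log line.
     * EAL may permute argv but does not modify the strings themselves. */
    char *eal_argv[] = {
        (char *)"nvmf",
        (char *)"-c", (char *)"0xF0",          /* coremask: cores 4-7 */
        (char *)"--no-telemetry",
        (char *)"--log-level=lib.eal:6",
        (char *)"--log-level=lib.cryptodev:5",
        (char *)"--log-level=user1:6",
        (char *)"--base-virtaddr=0x200000000000",
        (char *)"--match-allocations",
        (char *)"--file-prefix=spdk0",         /* namespaces hugepage files */
        (char *)"--proc-type=auto",
    };
    int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

    /* rte_eal_init() parses the EAL flags and brings up DPDK; it returns
     * the number of arguments consumed, or a negative value on failure. */
    int ret = rte_eal_init(eal_argc, eal_argv);
    if (ret < 0) {
        fprintf(stderr, "rte_eal_init failed\n");
        return 1;
    }
    puts("EAL initialized");
    rte_eal_cleanup();
    return 0;
}
```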
00:22:08.715 [... the triplet for tqpair=0x7f3538000b90 repeats unchanged from 11:02:24.860569 through 11:02:24.865682 ...]
00:22:08.716 [2024-05-15 11:02:24.866486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:08.716 [2024-05-15 11:02:24.866542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420
00:22:08.716 qpair failed and we were unable to recover it.
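Note that the tqpair pointer switches from 0x7f3538000b90 to 0x7f3540000b90 and later back: each failed attempt tears the qpair down, and the next attempt allocates a fresh object, so the pointer value alone does not identify a retry series. A reconnect loop in the same spirit might look like the sketch below (illustrative only; the attempt count and delay are assumptions, not SPDK's actual retry policy):

    import socket
    import time

    # Each attempt builds a brand-new socket, mirroring how a new qpair
    # object (and hence a new tqpair pointer) shows up per retry in the log.
    def connect_with_retries(addr, port, attempts=5, delay_s=0.25):
        for _ in range(attempts):
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            try:
                s.connect((addr, port))
                return s                  # recovered
            except ConnectionRefusedError:
                s.close()                 # "qpair failed" analogue
                time.sleep(delay_s)
        return None                       # unable to recover it

    conn = connect_with_retries("10.0.0.2", 4420)
    print("connected" if conn else "gave up")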
00:22:09.006 [2024-05-15 11:02:24.903861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.006 [2024-05-15 11:02:24.903887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:09.006 qpair failed and we were unable to recover it.
00:22:09.006 EAL: No free 2048 kB hugepages reported on node 1
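The EAL line above is DPDK reporting that NUMA node 1 had no free 2048 kB hugepages when the nvmf target initialized; with --match-allocations and memory satisfied from another node this can be informational rather than fatal. The per-node counters behind that message can be read from sysfs, for example (a sketch; standard Linux sysfs paths, no SPDK involved):

    from pathlib import Path

    # Print free/total 2048 kB hugepages per NUMA node, the counters that
    # produce messages like "No free 2048 kB hugepages reported on node 1".
    base = Path("/sys/devices/system/node")
    for node in sorted(base.glob("node[0-9]*")):
        hp = node / "hugepages" / "hugepages-2048kB"
        if hp.exists():
            free = (hp / "free_hugepages").read_text().strip()
            total = (hp / "nr_hugepages").read_text().strip()
            print(f"{node.name}: {free}/{total} 2048kB hugepages free")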
00:22:09.006 [2024-05-15 11:02:24.913244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.006 [2024-05-15 11:02:24.913270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:09.007 qpair failed and we were unable to recover it.
00:22:09.007 [2024-05-15 11:02:24.913452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.913478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.913660] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.913686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.913891] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.913917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.914119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.914146] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.914367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.914393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.914628] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.914654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.914831] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.914856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.915062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.915089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.915271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.915297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.915513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.915539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 
00:22:09.007 [2024-05-15 11:02:24.915720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.915746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.915977] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.916004] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.916221] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.916247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.916458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.916484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.916711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.916738] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.916962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.916989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.917173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.917200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.917412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.917439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.917647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.917674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.917887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.917913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 
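For anyone triaging this: errno = 111 on Linux is ECONNREFUSED, meaning the TCP connection to 10.0.0.2 port 4420 (the IANA-assigned NVMe/TCP port) was actively refused because nothing was listening on it at connect time. A minimal standalone sketch, not SPDK source, that produces the same errno whenever no listener is bound to the target port:

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Address and port taken from the log above; hypothetical repro only. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        /* With no listener on the port this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}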
00:22:09.007 [2024-05-15 11:02:24.918152] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.918191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.918457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.918484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.918701] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.918728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.918911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.918945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.919157] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.919183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.919398] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.919424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.919599] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.919625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.919820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.919846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.920052] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.920080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.920275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.920301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 
00:22:09.007 [2024-05-15 11:02:24.920537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.920563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.920745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.920771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.920991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.921018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.921206] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.921233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.921440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.921466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.921673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.921703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.921924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.921960] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.922179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.922205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.922386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.922412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.922592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.922618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 
00:22:09.007 [2024-05-15 11:02:24.922822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.922847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.923051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.923078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.923291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.923317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.923518] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.923544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.923801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.923827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.924017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.924044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.924235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.924263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.924497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.924524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.924700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.007 [2024-05-15 11:02:24.924727] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.007 qpair failed and we were unable to recover it. 00:22:09.007 [2024-05-15 11:02:24.924921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.924953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 
00:22:09.008 [2024-05-15 11:02:24.925189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.925215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.925428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.925454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.925675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.925701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.925952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.925978] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.926186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.926212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.926415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.926440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.926657] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.926683] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.926896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.926922] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.927165] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.927191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.927376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.927402] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 
00:22:09.008 [2024-05-15 11:02:24.927582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.927608] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.927789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.927816] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.928053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.928081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.928273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.928299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.928534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.928559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.928749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.928774] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.928962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.928989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.929186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.929212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.929428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.929454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.929665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.929692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 
00:22:09.008 [2024-05-15 11:02:24.929873] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.929899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.930141] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.930167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.930364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.930390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.930602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.930629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.930860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.930886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.931112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.931142] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.931343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.931368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.931570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.931595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.931819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.931846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.932064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.932091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 
00:22:09.008 [2024-05-15 11:02:24.932272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.932299] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.932517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.932543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.932733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.932760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.932963] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.932989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.933197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.933222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.933413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.933439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.933692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.933718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.933902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.933934] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.934177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.934203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.934421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.934448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 
00:22:09.008 [2024-05-15 11:02:24.934661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.934687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.934892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.934917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.935135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.935161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.935381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.935406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.935596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.935622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.935895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.935921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.936149] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.936176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.936394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.936421] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.936602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.936628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.936842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.936868] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 
00:22:09.008 [2024-05-15 11:02:24.937081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.937108] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.008 qpair failed and we were unable to recover it. 00:22:09.008 [2024-05-15 11:02:24.937298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.008 [2024-05-15 11:02:24.937324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.937540] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.937567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.937763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.937788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.937998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.938024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.938254] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.938282] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.938497] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.938522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.938731] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.938757] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.938993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.939019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.939200] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.939226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 
00:22:09.009 [2024-05-15 11:02:24.939401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.939427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.939663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.939689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.939872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.939899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.940087] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.940114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.940331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.940357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.940561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.940587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.940766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.940792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.941000] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.941026] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.941237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.941263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.941443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.941469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 
00:22:09.009 [2024-05-15 11:02:24.941681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.941709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.941922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.941954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.942162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.942188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.942404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.942430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.942629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.942654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.942870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.942895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.943124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.943151] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.943355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.943381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.943565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.943592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.943814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.943840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 
00:22:09.009 [2024-05-15 11:02:24.944060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.944087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.944295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.944321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.944551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.944578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.944778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.944804] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.944968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:09.009 [2024-05-15 11:02:24.945010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.945036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.945215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.945242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.945448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.945473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.945711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.945737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 00:22:09.009 [2024-05-15 11:02:24.945946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.009 [2024-05-15 11:02:24.945973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.009 qpair failed and we were unable to recover it. 
00:22:09.011 [2024-05-15 11:02:24.975624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.011 [2024-05-15 11:02:24.975667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.011 qpair failed and we were unable to recover it. 00:22:09.011 [2024-05-15 11:02:24.975879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.011 [2024-05-15 11:02:24.975907] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.011 qpair failed and we were unable to recover it. 00:22:09.011 [2024-05-15 11:02:24.976136] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.011 [2024-05-15 11:02:24.976163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.011 qpair failed and we were unable to recover it. 00:22:09.011 [2024-05-15 11:02:24.976405] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.011 [2024-05-15 11:02:24.976431] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.976603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.976629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.976814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.976842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.977060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.977087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.977327] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.977352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.977533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.977559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.977737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.977763] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 
00:22:09.012 [2024-05-15 11:02:24.977951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.977977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.978182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.978208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.978419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.978445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.978629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.978655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.978858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.978883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.979134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.979161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.979454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.979480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.979718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.979745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.979975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.980002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.980195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.980221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 
00:22:09.012 [2024-05-15 11:02:24.980404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.980432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.980618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.980645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.980888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.980914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.981110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.981136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.981371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.981396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.981578] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.981604] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.981785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.981812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.982065] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.982092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.982309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.982335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.982549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.982574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 
00:22:09.012 [2024-05-15 11:02:24.982783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.982809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.983038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.983064] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.983246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.983272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.983443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.983468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.983679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.983705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.983892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.983918] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.984145] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.984172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.984378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.984403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.984584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.984610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.984796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.984823] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 
00:22:09.012 [2024-05-15 11:02:24.985012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.985038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.985225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.985250] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.985457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.985483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.985685] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.985711] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.985919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.985953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.986130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.986157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.986347] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.986373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.986582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.986607] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.986849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.986874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.987071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.987097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 
00:22:09.012 [2024-05-15 11:02:24.987277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.987302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.987514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.987540] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.987750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.987776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.987983] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.988009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.988182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.988208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.988386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.988412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.988598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.988624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.988825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.988856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.989093] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.989120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 00:22:09.012 [2024-05-15 11:02:24.989326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.012 [2024-05-15 11:02:24.989351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.012 qpair failed and we were unable to recover it. 
00:22:09.012 [2024-05-15 11:02:24.989538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.012 [2024-05-15 11:02:24.989564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.012 qpair failed and we were unable to recover it.
00:22:09.012 [2024-05-15 11:02:24.989748] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.012 [2024-05-15 11:02:24.989773] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.012 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.989954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.989981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.990191] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.990217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.990446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.990472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.990667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.990692] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.990917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.990950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.991163] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.991188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.991426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.991452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.991662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.991688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.991870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.991898] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.992123] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.992149] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.992328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.992354] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.992558] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.992584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.992767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.992792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.992997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.993024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.993259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.993285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.993490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.993516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.993699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.993726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.993938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.993965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.994176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.994202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.994407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.994433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.994643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.994669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.994861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.994886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.995102] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.995136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.995322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.995348] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.995535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.995560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.995742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.995768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.995981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.996009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.996192] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.996218] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.996426] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.996452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.996654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.996679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.996861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.996887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.997150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.997177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.997415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.997441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.997661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.997687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.997894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.997920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.998108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.998134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.998325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.998351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.998561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.998587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.998788] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.998814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.999025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.999052] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.999271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.999296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.999504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.999531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.999740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.999767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:24.999970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:24.999997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:25.000181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:25.000207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:25.000420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:25.000446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:25.000661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:25.000686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:25.000893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:25.000919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:25.001106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:25.001132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:25.001346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:25.001371] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:25.001581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:25.001606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:25.001807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:25.001832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:25.002083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:25.002110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:25.002317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:25.002342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:25.002580] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:25.002606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:25.002784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:25.002809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:25.003024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:25.003051] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:25.003266] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:25.003292] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:25.003527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:25.003553] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:25.003764] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:25.003789] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:25.004032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:25.004058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:25.004241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:25.004266] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:25.004454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:25.004479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:25.004661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:25.004686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.013 [2024-05-15 11:02:25.004868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.013 [2024-05-15 11:02:25.004894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.013 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.005097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.005125] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.005312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.005340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.005557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.005583] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.005765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.005790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.006003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.006030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.006212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.006237] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.006445] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.006471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.006651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.006675] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.006883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.006909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.007089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.007115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.007328] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.007356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.007543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.007568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.007753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.007778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.007996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.008022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.008203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.008228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.008457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.008483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.008692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.008718] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.008896] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.008921] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.009159] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.009187] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.009378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.009405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.009635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.009661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.009849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.009875] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.010062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.010088] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.010312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.010338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.010517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.010543] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.010779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.010809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.011022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.011049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.011232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.011260] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.011472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.011498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.011734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.011760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.011979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.012006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.012222] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.012247] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.012460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.012486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.012666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.012691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.012907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.012938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.013118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.013143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.013331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.013357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.013564] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.013590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.013777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.013803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.014002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.014027] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.014208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.014234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.014418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.014443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.014690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.014716] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.014908] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.014940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.015124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.015150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.015373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.015398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.015609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.015635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.015818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.015844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.016040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.016066] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.016299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.016324] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.016509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.016534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.016782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.016806] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.017030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.017061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.017243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.017270] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.017481] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.017508] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.017722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.017749] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.017936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.017964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.018175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.018201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.018386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.018412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.018598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.018623] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.018834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.018860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.019043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.019070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.019323] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.019349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.019555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.019580] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.019796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.019820] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.020037] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.020063] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.020306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.020331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.014 [2024-05-15 11:02:25.020543] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.014 [2024-05-15 11:02:25.020568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.014 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.020755] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.020782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.021007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.021033] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.021250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.021275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.021507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.021532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.021709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.021734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.021920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.021951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.022156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.022181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.022381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.022407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.022620] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.022645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.022830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.022855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.023042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.023069] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.023256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.023286] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.023463] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.023489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.023664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.023689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.023924] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.023957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.024147] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.024172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.024381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.024406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.024585] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.024610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.024845] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.024870] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.025077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.025104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.025289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.025317] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.025555] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.025581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.025815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.025841] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.026053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.026079] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.026269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.026295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.026482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.026509] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.026698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.026725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.026939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.026965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.027182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.027207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.027443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.027468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.027649] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.027674] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.027881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.027906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.028095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.028121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.028301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.028327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.028510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.015 [2024-05-15 11:02:25.028537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.015 qpair failed and we were unable to recover it.
00:22:09.015 [2024-05-15 11:02:25.028746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.015 [2024-05-15 11:02:25.028771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.015 qpair failed and we were unable to recover it. 00:22:09.015 [2024-05-15 11:02:25.029005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.015 [2024-05-15 11:02:25.029030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.015 qpair failed and we were unable to recover it. 00:22:09.015 [2024-05-15 11:02:25.029205] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.015 [2024-05-15 11:02:25.029230] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.015 qpair failed and we were unable to recover it. 00:22:09.015 [2024-05-15 11:02:25.029439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.015 [2024-05-15 11:02:25.029465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.015 qpair failed and we were unable to recover it. 00:22:09.015 [2024-05-15 11:02:25.029681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.015 [2024-05-15 11:02:25.029707] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.015 qpair failed and we were unable to recover it. 00:22:09.015 [2024-05-15 11:02:25.029882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.015 [2024-05-15 11:02:25.029908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.015 qpair failed and we were unable to recover it. 00:22:09.015 [2024-05-15 11:02:25.030097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.015 [2024-05-15 11:02:25.030124] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.015 qpair failed and we were unable to recover it. 00:22:09.015 [2024-05-15 11:02:25.030335] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.015 [2024-05-15 11:02:25.030360] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.015 qpair failed and we were unable to recover it. 00:22:09.015 [2024-05-15 11:02:25.030565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.015 [2024-05-15 11:02:25.030590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.015 qpair failed and we were unable to recover it. 00:22:09.015 [2024-05-15 11:02:25.030777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.015 [2024-05-15 11:02:25.030802] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.015 qpair failed and we were unable to recover it. 
00:22:09.015 [2024-05-15 11:02:25.031013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.015 [2024-05-15 11:02:25.031038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.015 qpair failed and we were unable to recover it. 00:22:09.015 [2024-05-15 11:02:25.031247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.015 [2024-05-15 11:02:25.031272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.015 qpair failed and we were unable to recover it. 00:22:09.015 [2024-05-15 11:02:25.031457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.015 [2024-05-15 11:02:25.031482] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.015 qpair failed and we were unable to recover it. 00:22:09.015 [2024-05-15 11:02:25.031720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.015 [2024-05-15 11:02:25.031745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.015 qpair failed and we were unable to recover it. 00:22:09.015 [2024-05-15 11:02:25.031976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.015 [2024-05-15 11:02:25.032002] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.015 qpair failed and we were unable to recover it. 00:22:09.015 [2024-05-15 11:02:25.032193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.015 [2024-05-15 11:02:25.032219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.015 qpair failed and we were unable to recover it. 00:22:09.015 [2024-05-15 11:02:25.032430] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.015 [2024-05-15 11:02:25.032455] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.015 qpair failed and we were unable to recover it. 00:22:09.015 [2024-05-15 11:02:25.032675] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.015 [2024-05-15 11:02:25.032701] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.015 qpair failed and we were unable to recover it. 00:22:09.015 [2024-05-15 11:02:25.032903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.015 [2024-05-15 11:02:25.032945] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.015 qpair failed and we were unable to recover it. 00:22:09.015 [2024-05-15 11:02:25.033133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.015 [2024-05-15 11:02:25.033159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.015 qpair failed and we were unable to recover it. 
00:22:09.015 [2024-05-15 11:02:25.033393] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.015 [2024-05-15 11:02:25.033417] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.015 qpair failed and we were unable to recover it. 00:22:09.015 [2024-05-15 11:02:25.033632] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.015 [2024-05-15 11:02:25.033659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.015 qpair failed and we were unable to recover it. 00:22:09.015 [2024-05-15 11:02:25.033838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.015 [2024-05-15 11:02:25.033862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.015 qpair failed and we were unable to recover it. 00:22:09.015 [2024-05-15 11:02:25.034049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.015 [2024-05-15 11:02:25.034075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.015 qpair failed and we were unable to recover it. 00:22:09.015 [2024-05-15 11:02:25.034264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.034290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.034499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.034524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.034703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.034728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.034910] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.034941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.035154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.035181] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.035362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.035389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 
00:22:09.016 [2024-05-15 11:02:25.035598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.035624] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.035838] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.035864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.036103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.036129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.036321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.036347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.036535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.036561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.036811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.036837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.037078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.037104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.037304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.037330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.037538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.037564] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.037798] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.037824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 
00:22:09.016 [2024-05-15 11:02:25.038072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.038098] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.038277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.038301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.038537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.038562] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.038773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.038797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.038998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.039028] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.039211] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.039236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.039437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.039463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.039639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.039665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.039868] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.039895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.040112] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.040138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 
00:22:09.016 [2024-05-15 11:02:25.040354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.040378] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.040612] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.040637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.040810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.040834] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.041039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.041065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.041268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.041293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.041470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.041496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.041704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.041731] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.041982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.042009] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.042240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.042265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.042470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.042495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 
00:22:09.016 [2024-05-15 11:02:25.042672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.042698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.042888] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.042912] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.043118] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.043143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.043355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.043380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.043563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.043588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.043766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.043791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.044005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.044031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.044241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.044267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.044476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.044502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.044687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.044712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 
00:22:09.016 [2024-05-15 11:02:25.044886] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.044911] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.045106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.045136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.045374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.045399] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.045634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.045659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.045885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.045910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.046122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.046147] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.046360] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.046385] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.046617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.046642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.046832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.046857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.047066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.047092] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 
00:22:09.016 [2024-05-15 11:02:25.047278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.047303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.047512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.047537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.047746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.047772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.047957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.047984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.048179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.048205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.048397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.048423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.048610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.016 [2024-05-15 11:02:25.048635] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.016 qpair failed and we were unable to recover it. 00:22:09.016 [2024-05-15 11:02:25.048818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.048844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.049067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.049094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.049281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.049306] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 
00:22:09.017 [2024-05-15 11:02:25.049513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.049539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.049740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.049765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.049956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.049983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.050169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.050195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.050444] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.050468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.050654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.050678] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.050858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.050882] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.051068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.051096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.051304] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.051330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.051515] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.051541] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 
00:22:09.017 [2024-05-15 11:02:25.051758] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.051783] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.051967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.051996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.052212] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.052236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.052447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.052471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.052692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.052717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.052925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.052958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.053161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.053188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.053378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.053406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.053593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.053618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.053808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.053833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 
00:22:09.017 [2024-05-15 11:02:25.054017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.054042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.054232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.054258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.054449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.054475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.054680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.054706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.054880] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.054905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.055132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.055158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.055367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.055392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.055601] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.055626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.055866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.055891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.056108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.056133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 
00:22:09.017 [2024-05-15 11:02:25.056320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.056345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.056523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.056549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.056783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.056808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.057041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.057067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.057277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.057302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.057510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.057535] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.057746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.057772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.057965] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.058001] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.058190] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.058215] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.058402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.058426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 
00:22:09.017 [2024-05-15 11:02:25.058666] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.058690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.058919] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.058948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.059176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.059200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.059387] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.059413] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.059618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.059643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.059828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.059853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.060068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.060094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.060272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.060297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.060478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.060502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.060679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.060709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 
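errno = 111 in the records above is ECONNREFUSED on Linux: each connect() to 10.0.0.2 port 4420 is actively refused because nothing is listening on the NVMe/TCP port yet, while the initiator keeps retrying. A minimal sketch to confirm the errno name from any Linux shell (hypothetical check, not part of this run):

  $ python3 -c "import errno, os; print(errno.errorcode[111], '-', os.strerror(111))"
  ECONNREFUSED - Connection refused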
00:22:09.017 [2024-05-15 11:02:25.061896] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:09.017 [2024-05-15 11:02:25.061940] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:09.017 [2024-05-15 11:02:25.061957] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:09.017 [2024-05-15 11:02:25.061969] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:09.017 [2024-05-15 11:02:25.061987] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:22:09.017 [2024-05-15 11:02:25.061991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.017 [2024-05-15 11:02:25.062015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.017 qpair failed and we were unable to recover it.
00:22:09.017 [2024-05-15 11:02:25.062062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:22:09.017 [2024-05-15 11:02:25.062193] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.017 [2024-05-15 11:02:25.062217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.017 qpair failed and we were unable to recover it.
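The app_setup_trace notices above give the trace-capture recipe for this run. A minimal sketch of both options, using only the command and shared-memory file named in the log (hypothetical invocations, not executed here):

  # capture a snapshot of events from the running nvmf target, trace instance 0
  $ spdk_trace -s nvmf -i 0
  # or preserve the raw trace buffer for offline analysis/debug
  $ cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0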
00:22:09.017 [2024-05-15 11:02:25.062214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:09.017 [2024-05-15 11:02:25.062262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:22:09.017 [2024-05-15 11:02:25.062265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:09.017 [2024-05-15 11:02:25.062406] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.062432] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.062622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.062648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.062823] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.062849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.017 qpair failed and we were unable to recover it. 00:22:09.017 [2024-05-15 11:02:25.063060] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.017 [2024-05-15 11:02:25.063086] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.018 qpair failed and we were unable to recover it. 00:22:09.018 [2024-05-15 11:02:25.063293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.018 [2024-05-15 11:02:25.063318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.018 qpair failed and we were unable to recover it. 00:22:09.018 [2024-05-15 11:02:25.063521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.018 [2024-05-15 11:02:25.063546] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.018 qpair failed and we were unable to recover it. 00:22:09.018 [2024-05-15 11:02:25.063733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.018 [2024-05-15 11:02:25.063758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.018 qpair failed and we were unable to recover it. 00:22:09.018 [2024-05-15 11:02:25.063968] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.018 [2024-05-15 11:02:25.063998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.018 qpair failed and we were unable to recover it. 00:22:09.018 [2024-05-15 11:02:25.064178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.018 [2024-05-15 11:02:25.064203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.018 qpair failed and we were unable to recover it. 
00:22:09.018 [2024-05-15 11:02:25.075590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.018 [2024-05-15 11:02:25.075616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.018 qpair failed and we were unable to recover it.
00:22:09.018 [2024-05-15 11:02:25.075801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.018 [2024-05-15 11:02:25.075827] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.018 qpair failed and we were unable to recover it.
00:22:09.018 [2024-05-15 11:02:25.076032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.018 [2024-05-15 11:02:25.076076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420
00:22:09.018 qpair failed and we were unable to recover it.
00:22:09.018 [2024-05-15 11:02:25.076306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.018 [2024-05-15 11:02:25.076335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420
00:22:09.018 qpair failed and we were unable to recover it.
00:22:09.018 [2024-05-15 11:02:25.076527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.018 [2024-05-15 11:02:25.076554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420
00:22:09.018 qpair failed and we were unable to recover it.
00:22:09.018 [2024-05-15 11:02:25.076736] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.018 [2024-05-15 11:02:25.076762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420
00:22:09.018 qpair failed and we were unable to recover it.
00:22:09.018 [2024-05-15 11:02:25.076954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.018 [2024-05-15 11:02:25.076982] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420
00:22:09.018 qpair failed and we were unable to recover it.
00:22:09.018 [2024-05-15 11:02:25.077194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.018 [2024-05-15 11:02:25.077220] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420
00:22:09.019 qpair failed and we were unable to recover it.
00:22:09.019 [2024-05-15 11:02:25.077403] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.019 [2024-05-15 11:02:25.077429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420
00:22:09.019 qpair failed and we were unable to recover it.
00:22:09.019 [2024-05-15 11:02:25.077607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.019 [2024-05-15 11:02:25.077633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420
00:22:09.019 qpair failed and we were unable to recover it.
00:22:09.021 [2024-05-15 11:02:25.109862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.109886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.110078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.110105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.110315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.110343] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.110535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.110560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.110745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.110770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.110958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.110985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.111164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.111188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.111573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.111597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.111825] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.111851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.112030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.112056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 
00:22:09.021 [2024-05-15 11:02:25.112251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.112277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.112462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.112487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.112699] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.112725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.112911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.112941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.113124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.113150] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.113320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.113346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.113577] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.113603] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.113836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.113862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.114049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.114076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.114281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.114307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 
00:22:09.021 [2024-05-15 11:02:25.114499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.114524] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.114695] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.114721] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.114900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.114925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.115142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.115175] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.115355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.115382] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.115590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.115615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.115808] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.115833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.116035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.116061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.116240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.116265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.116466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.116491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 
00:22:09.021 [2024-05-15 11:02:25.116683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.116709] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.116900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.116956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.117154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.117180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.117372] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.117398] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.117611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.117636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.117818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.117843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.118051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.118077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.118278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.118304] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.118475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.118501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.118683] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.118708] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 
00:22:09.021 [2024-05-15 11:02:25.118920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.118952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.119144] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.119170] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.119377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.119403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.119625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.119650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.119836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.119861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.120053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.120080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.120281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.120307] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.120511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.120537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.120720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.120745] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.120943] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.120969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 
00:22:09.021 [2024-05-15 11:02:25.121155] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.121184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.121365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.121390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.121600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.121625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.121816] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.121844] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.122080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.122107] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.122290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.021 [2024-05-15 11:02:25.122315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.021 qpair failed and we were unable to recover it. 00:22:09.021 [2024-05-15 11:02:25.122522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.122547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.122734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.122759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.122938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.122965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.123148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.123173] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 
00:22:09.022 [2024-05-15 11:02:25.123369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.123395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.123606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.123633] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.123842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.123867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.124078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.124104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.124310] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.124335] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.124523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.124548] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.124762] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.124788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.124996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.125022] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.125230] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.125257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.125441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.125466] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 
00:22:09.022 [2024-05-15 11:02:25.125654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.125679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.125892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.125917] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.126108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.126133] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.126309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.126336] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.126517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.126544] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.126726] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.126752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.126960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.126987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.127188] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.127217] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.127402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.127427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.127611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.127637] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 
00:22:09.022 [2024-05-15 11:02:25.127820] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.127845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.128035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.128062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.128238] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.128264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.128447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.128473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.128667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.128693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.128898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.128924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.129127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.129153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.129331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.129356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.129561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.129586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.129759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.129785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 
00:22:09.022 [2024-05-15 11:02:25.130003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.130029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.130241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.130267] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.130447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.130472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.130652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.130677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.131012] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.131039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.131245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.131271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.131462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.131488] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.131700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.131728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.131906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.131938] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.132122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.132148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 
00:22:09.022 [2024-05-15 11:02:25.132337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.132363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.132557] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.132584] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.132759] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.132784] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.132994] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.133020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.133224] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.133251] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.133429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.133454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.133643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.133669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.133852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.133876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.134095] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.134121] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.134322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.134347] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 
00:22:09.022 [2024-05-15 11:02:25.134533] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.134561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.134738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.134765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.134956] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.134983] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.135161] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.135188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.135408] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.135433] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.135607] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.135632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.135810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.135835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.136055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.136081] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.136297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.136326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.136521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.136547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 
00:22:09.022 [2024-05-15 11:02:25.136727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.136752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.136940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.136966] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.137137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.137163] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.137391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.137416] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.137630] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.137655] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.137966] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.137993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.138214] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.138240] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.138436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.022 [2024-05-15 11:02:25.138461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.022 qpair failed and we were unable to recover it. 00:22:09.022 [2024-05-15 11:02:25.138687] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.023 [2024-05-15 11:02:25.138712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.023 qpair failed and we were unable to recover it. 00:22:09.023 [2024-05-15 11:02:25.138922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.023 [2024-05-15 11:02:25.138952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.023 qpair failed and we were unable to recover it. 
00:22:09.023 [2024-05-15 11:02:25.139142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.023 [2024-05-15 11:02:25.139167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.023 qpair failed and we were unable to recover it.
00:22:09.023 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats for roughly 200 further reconnect attempts between 11:02:25.139 and 11:02:25.186 ...]
00:22:09.026 [2024-05-15 11:02:25.186133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.026 [2024-05-15 11:02:25.186158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.026 qpair failed and we were unable to recover it.
00:22:09.026 [2024-05-15 11:02:25.186368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.026 [2024-05-15 11:02:25.186393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.026 qpair failed and we were unable to recover it. 00:22:09.026 [2024-05-15 11:02:25.186603] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.026 [2024-05-15 11:02:25.186628] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.026 qpair failed and we were unable to recover it. 00:22:09.026 [2024-05-15 11:02:25.186841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.026 [2024-05-15 11:02:25.186867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.026 qpair failed and we were unable to recover it. 00:22:09.026 [2024-05-15 11:02:25.187067] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.026 [2024-05-15 11:02:25.187094] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.026 qpair failed and we were unable to recover it. 00:22:09.026 [2024-05-15 11:02:25.187272] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.026 [2024-05-15 11:02:25.187297] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.026 qpair failed and we were unable to recover it. 00:22:09.026 [2024-05-15 11:02:25.187499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.026 [2024-05-15 11:02:25.187525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.026 qpair failed and we were unable to recover it. 00:22:09.026 [2024-05-15 11:02:25.187724] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.026 [2024-05-15 11:02:25.187750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.026 qpair failed and we were unable to recover it. 00:22:09.026 [2024-05-15 11:02:25.187962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.026 [2024-05-15 11:02:25.187988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.026 qpair failed and we were unable to recover it. 00:22:09.026 [2024-05-15 11:02:25.188181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.026 [2024-05-15 11:02:25.188206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.026 qpair failed and we were unable to recover it. 00:22:09.026 [2024-05-15 11:02:25.188401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.026 [2024-05-15 11:02:25.188427] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.026 qpair failed and we were unable to recover it. 
00:22:09.026 [2024-05-15 11:02:25.188624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.026 [2024-05-15 11:02:25.188650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.026 qpair failed and we were unable to recover it. 00:22:09.026 [2024-05-15 11:02:25.188847] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.026 [2024-05-15 11:02:25.188872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.026 qpair failed and we were unable to recover it. 00:22:09.026 [2024-05-15 11:02:25.189042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.026 [2024-05-15 11:02:25.189068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.026 qpair failed and we were unable to recover it. 00:22:09.026 [2024-05-15 11:02:25.189237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.026 [2024-05-15 11:02:25.189262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.026 qpair failed and we were unable to recover it. 00:22:09.026 [2024-05-15 11:02:25.189471] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.189496] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.189705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.189730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.189913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.189943] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.190131] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.190157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.190338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.190364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.190541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.190567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 
00:22:09.027 [2024-05-15 11:02:25.190750] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.190776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.190985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.191011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.191187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.191216] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.191451] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.191476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.191669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.191695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.191875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.191900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.192085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.192111] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.192301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.192327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.192526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.192552] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.192757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.192782] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 
00:22:09.027 [2024-05-15 11:02:25.192960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.192987] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.193195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.193221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.193411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.193437] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.193613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.193638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.193828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.193853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.194039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.194065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.194255] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.194281] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.194514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.194539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.194746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.194772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.194985] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.195012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 
00:22:09.027 [2024-05-15 11:02:25.195196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.195221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.195428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.195454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.195637] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.195662] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.195870] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.195895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.196106] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.196132] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.196320] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.196346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.196530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.196557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.196765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.196791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.196991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.197017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.197201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.197231] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 
00:22:09.027 [2024-05-15 11:02:25.197409] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.197435] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.197617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.197643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.197858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.197883] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.198071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.198097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.198297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.198323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.198501] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.198526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.198694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.198720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.198926] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.198958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.199133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.199158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.199369] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.199395] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 
00:22:09.027 [2024-05-15 11:02:25.199606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.199631] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.199853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.199880] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.200092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.200118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.200305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.200331] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.200507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.200532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.200716] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.200741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.200946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.200973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.201176] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.027 [2024-05-15 11:02:25.201201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.027 qpair failed and we were unable to recover it. 00:22:09.027 [2024-05-15 11:02:25.201374] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.201401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.201600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.201626] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 
00:22:09.028 [2024-05-15 11:02:25.201804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.201829] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.202011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.202038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.202213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.202241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.202428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.202454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.202629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.202654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.202840] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.202866] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.203055] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.203084] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.203271] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.203298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.203512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.203538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.203746] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.203771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 
00:22:09.028 [2024-05-15 11:02:25.203950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.203976] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.204158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.204184] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.204415] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.204441] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.204622] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.204648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.204835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.204861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.205096] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.205122] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.205331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.205356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.205568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.205594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.205772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.205797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.205981] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.206008] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 
00:22:09.028 [2024-05-15 11:02:25.206207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.206233] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.206439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.206465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.206654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.206679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.206851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.206876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.207086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.207112] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.207286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.207312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.207500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.207526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.207709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.207734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.207916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.207953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.208127] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.208152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 
00:22:09.028 [2024-05-15 11:02:25.208331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.208356] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.208565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.208590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.208789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.208815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.209013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.209039] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.209252] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.209277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.209455] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.209480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.209664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.209690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.209906] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.209946] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.210122] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.210148] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.210351] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.210376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 
00:22:09.028 [2024-05-15 11:02:25.210551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.210576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.210745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.210771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.210967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.210993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.211210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.211238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.211412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.211438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.211644] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.211669] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.211872] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.211897] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.212089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.212120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.212295] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.212321] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.212531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.212558] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 
00:22:09.028 [2024-05-15 11:02:25.212740] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.212765] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.212945] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.212971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.213180] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.213206] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.213411] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.213436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.213638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.213663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.213842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.213867] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.214046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.214073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.214278] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.214303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.214500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.214526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.028 [2024-05-15 11:02:25.214711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.214736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 
00:22:09.028 [2024-05-15 11:02:25.214949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.028 [2024-05-15 11:02:25.214975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.028 qpair failed and we were unable to recover it. 00:22:09.029 [2024-05-15 11:02:25.215167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.029 [2024-05-15 11:02:25.215193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.029 qpair failed and we were unable to recover it. 00:22:09.029 [2024-05-15 11:02:25.215365] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.029 [2024-05-15 11:02:25.215390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.029 qpair failed and we were unable to recover it. 00:22:09.029 [2024-05-15 11:02:25.215592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.029 [2024-05-15 11:02:25.215617] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.029 qpair failed and we were unable to recover it. 00:22:09.029 [2024-05-15 11:02:25.215846] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.029 [2024-05-15 11:02:25.215872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.029 qpair failed and we were unable to recover it. 00:22:09.029 [2024-05-15 11:02:25.216061] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.029 [2024-05-15 11:02:25.216087] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.029 qpair failed and we were unable to recover it. 00:22:09.029 [2024-05-15 11:02:25.216265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.029 [2024-05-15 11:02:25.216291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.029 qpair failed and we were unable to recover it. 00:22:09.029 [2024-05-15 11:02:25.216500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.029 [2024-05-15 11:02:25.216525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.029 qpair failed and we were unable to recover it. 00:22:09.029 [2024-05-15 11:02:25.216747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.029 [2024-05-15 11:02:25.216772] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.029 qpair failed and we were unable to recover it. 00:22:09.029 [2024-05-15 11:02:25.216973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.029 [2024-05-15 11:02:25.216999] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.029 qpair failed and we were unable to recover it. 
00:22:09.312 [2024-05-15 11:02:25.259097] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.259123] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.259306] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.259332] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.259510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.259539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.259741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.259768] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.259982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.260012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.260218] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.260244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.260422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.260448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.260639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.260665] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.260843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.260869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.261057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.261083] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 
00:22:09.312 [2024-05-15 11:02:25.261283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.261309] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.261514] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.261539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.261742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.261767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.262003] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.262030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.262209] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.262234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.262438] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.262463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.262665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.262690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.262861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.262886] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.263074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.263101] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.263303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.263329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 
00:22:09.312 [2024-05-15 11:02:25.263544] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.263569] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.263757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.263785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.263996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.264023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.264235] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.264262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.264447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.264475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.264658] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.264684] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.264895] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.264920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.265110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.265136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.265305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.265330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.265513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.265538] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 
00:22:09.312 [2024-05-15 11:02:25.265744] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.265769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.265950] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.265981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.266195] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.266223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.266436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.266462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.266667] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.266693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.312 [2024-05-15 11:02:25.266889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.312 [2024-05-15 11:02:25.266914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.312 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.267114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.267140] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.267343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.267368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.267569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.267595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.267810] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.267835] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 
00:22:09.313 [2024-05-15 11:02:25.268019] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.268046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.268228] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.268253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.268436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.268462] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.268636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.268661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.268837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.268863] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.269079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.269105] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.269283] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.269311] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.269516] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.269542] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.269718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.269744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.269922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.269955] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 
00:22:09.313 [2024-05-15 11:02:25.270143] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.270169] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.270371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.270396] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.270569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.270594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.270826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.270851] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.271030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.271057] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.271239] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.271265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.271447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.271472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.271652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.271677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.271885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.271909] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.272135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.272161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 
00:22:09.313 [2024-05-15 11:02:25.272350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.272376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.272587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.272613] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.272827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.272852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.273035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.273062] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.273264] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.273290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.273477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.273502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.273673] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.273698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.273881] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.273906] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.274094] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.274120] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.274333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.274358] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 
00:22:09.313 [2024-05-15 11:02:25.274535] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.274559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.274763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.274788] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.313 [2024-05-15 11:02:25.274986] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.313 [2024-05-15 11:02:25.275012] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.313 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.275213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.275238] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.275442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.275467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.275679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.275704] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.275890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.275916] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.276109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.276136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.276342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.276368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.276541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.276566] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 
00:22:09.314 [2024-05-15 11:02:25.276775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.276800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.276989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.277015] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.277197] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.277224] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.277425] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.277451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.277651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.277677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.277866] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.277891] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.278110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.278136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.278324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.278349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.278538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.278565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.278777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.278803] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 
00:22:09.314 [2024-05-15 11:02:25.279011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.279038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.279215] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.279241] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.279422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.279448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.279621] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.279646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.279832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.279857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.280035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.280060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.280245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.280271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.280447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.280472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.280677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.280705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.280913] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.280949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 
00:22:09.314 [2024-05-15 11:02:25.281160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.281186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.281371] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.281397] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.281625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.281650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.281826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.281852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.282049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.282075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.282247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.282273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.282457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.282484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.282686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.282712] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.314 [2024-05-15 11:02:25.282921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.314 [2024-05-15 11:02:25.282958] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.314 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.283175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.283201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 
00:22:09.315 [2024-05-15 11:02:25.283410] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.283436] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.283645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.283671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.283855] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.283881] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.284078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.284104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.284314] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.284340] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.284546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.284572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.284781] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.284807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.285015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.285042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.285245] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.285271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.285482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.285507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 
00:22:09.315 [2024-05-15 11:02:25.285694] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.285719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.285899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.285924] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.286110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.286136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.286312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.286338] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.286511] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.286537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.286710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.286735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.286921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.286956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.287140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.287166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.287343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.287369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.287574] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.287599] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 
00:22:09.315 [2024-05-15 11:02:25.287771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.287798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.288004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.288031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.288243] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.288269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.288440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.288465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.288647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.288673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.288851] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.288877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.289083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.289110] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.289293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.289318] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.289500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.289527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 00:22:09.315 [2024-05-15 11:02:25.289706] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.315 [2024-05-15 11:02:25.289732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.315 qpair failed and we were unable to recover it. 
00:22:09.315 [2024-05-15 11:02:25.289915] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.315 [2024-05-15 11:02:25.289956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.315 qpair failed and we were unable to recover it.
00:22:09.322 [... the same three-line failure sequence (posix_sock_create connect() errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 11:02:25.289915 through 11:02:25.336093 ...]
00:22:09.322 [2024-05-15 11:02:25.336262] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.336288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.336504] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.336530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.336714] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.336740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.336916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.336948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.337130] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.337156] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.337329] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.337355] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.337563] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.337588] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.337769] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.337795] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.337976] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.338003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.338174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.338200] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 
00:22:09.322 [2024-05-15 11:02:25.338382] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.338407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.338589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.338615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.338817] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.338842] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.339046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.339072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.339256] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.339285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.339462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.339487] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.339698] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.339723] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.339921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.339952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.340172] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.340198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.340375] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.340401] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 
00:22:09.322 [2024-05-15 11:02:25.340615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.340644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.340850] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.340876] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.341050] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.341076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.341263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.341288] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.341479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.341504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.341709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.341734] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.341917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.341949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.342124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.342152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.342362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.342390] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.342592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.342618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 
00:22:09.322 [2024-05-15 11:02:25.342829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.342857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.343041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.343068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.343248] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.322 [2024-05-15 11:02:25.343273] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.322 qpair failed and we were unable to recover it. 00:22:09.322 [2024-05-15 11:02:25.343472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.343497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.343713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.343740] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.343958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.343985] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.344168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.344195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.344428] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.344454] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.344688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.344714] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.344940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.344967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 
00:22:09.323 [2024-05-15 11:02:25.345181] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.345207] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.345379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.345405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.345614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.345639] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.345822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.345847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.346030] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.346056] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.346250] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.346275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.346446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.346472] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.346679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.346705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.346893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.346919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.347115] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.347141] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 
00:22:09.323 [2024-05-15 11:02:25.347319] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.347345] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.347545] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.347571] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.347785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.347811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.347999] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.348025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.348217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.348243] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.348418] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.348444] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.348626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.348653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.348885] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.348910] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.349092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.349118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.349305] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.349330] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 
00:22:09.323 [2024-05-15 11:02:25.349507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.349532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.349739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.349780] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.349974] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.350003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.350196] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.350222] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.350402] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.350429] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.350609] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.350636] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.350842] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.350869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.351085] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.351113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.351303] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.351328] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.351500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.351527] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 
00:22:09.323 [2024-05-15 11:02:25.351732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.351758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.351967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.351993] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.352169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.352194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.323 [2024-05-15 11:02:25.352368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.323 [2024-05-15 11:02:25.352394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.323 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.352625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.352650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.352853] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.352879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.353073] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.353100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.353277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.353303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.353542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.353567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.353776] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.353801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 
00:22:09.324 [2024-05-15 11:02:25.354011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.354038] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.354210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.354236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.354422] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.354447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.354677] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.354703] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.354889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.354914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.355134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.355160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.355344] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.355370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.355552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.355578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.355747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.355776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.355967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.355994] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 
00:22:09.324 [2024-05-15 11:02:25.356171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.356198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.356401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.356426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.356604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.356632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.356813] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.356840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.357022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.357048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.357236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.357261] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.357472] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.357497] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.357668] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.357693] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.357869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.357894] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.358134] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.358161] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 
00:22:09.324 [2024-05-15 11:02:25.358367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.358392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.358594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.358619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.358807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.358832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.359020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.359046] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.359231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.359257] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.359465] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.359491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.359665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.359690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.359867] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.359892] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.360104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.360130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.360308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.360333] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 
00:22:09.324 [2024-05-15 11:02:25.360547] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.360573] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.360760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.360785] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.360970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.360997] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.361178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.361203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.361388] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.361412] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.361610] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.324 [2024-05-15 11:02:25.361640] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.324 qpair failed and we were unable to recover it. 00:22:09.324 [2024-05-15 11:02:25.361822] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.325 [2024-05-15 11:02:25.361847] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.325 qpair failed and we were unable to recover it. 00:22:09.325 [2024-05-15 11:02:25.362051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.325 [2024-05-15 11:02:25.362077] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.325 qpair failed and we were unable to recover it. 00:22:09.325 [2024-05-15 11:02:25.362288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.325 [2024-05-15 11:02:25.362313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.325 qpair failed and we were unable to recover it. 00:22:09.325 [2024-05-15 11:02:25.362522] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.325 [2024-05-15 11:02:25.362547] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.325 qpair failed and we were unable to recover it. 
00:22:09.325 [2024-05-15 11:02:25.362721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.325 [2024-05-15 11:02:25.362746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.325 qpair failed and we were unable to recover it. 00:22:09.325 [2024-05-15 11:02:25.362925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.325 [2024-05-15 11:02:25.362956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.325 qpair failed and we were unable to recover it. 00:22:09.325 [2024-05-15 11:02:25.363154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.325 [2024-05-15 11:02:25.363180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.325 qpair failed and we were unable to recover it. 00:22:09.325 [2024-05-15 11:02:25.363355] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.325 [2024-05-15 11:02:25.363380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.325 qpair failed and we were unable to recover it. 00:22:09.325 [2024-05-15 11:02:25.363590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.325 [2024-05-15 11:02:25.363615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.325 qpair failed and we were unable to recover it. 00:22:09.325 [2024-05-15 11:02:25.363792] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.325 [2024-05-15 11:02:25.363817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.325 qpair failed and we were unable to recover it. 00:22:09.325 [2024-05-15 11:02:25.363993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.325 [2024-05-15 11:02:25.364019] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.325 qpair failed and we were unable to recover it. 00:22:09.325 [2024-05-15 11:02:25.364202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.325 [2024-05-15 11:02:25.364228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.325 qpair failed and we were unable to recover it. 00:22:09.325 [2024-05-15 11:02:25.364454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.325 [2024-05-15 11:02:25.364479] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.325 qpair failed and we were unable to recover it. 00:22:09.325 [2024-05-15 11:02:25.364662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.325 [2024-05-15 11:02:25.364687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.325 qpair failed and we were unable to recover it. 
00:22:09.325 [2024-05-15 11:02:25.364890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.325 [2024-05-15 11:02:25.364915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.325 qpair failed and we were unable to recover it.
[... the same three-message failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats back-to-back, roughly 210 occurrences in total, between 11:02:25.364890 and 11:02:25.411005; only the first and last occurrences are shown here ...]
00:22:09.330 [2024-05-15 11:02:25.410970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.330 [2024-05-15 11:02:25.411005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.330 qpair failed and we were unable to recover it.
00:22:09.330 [2024-05-15 11:02:25.411179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.330 [2024-05-15 11:02:25.411205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.330 qpair failed and we were unable to recover it. 00:22:09.330 [2024-05-15 11:02:25.411380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.330 [2024-05-15 11:02:25.411406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.330 qpair failed and we were unable to recover it. 00:22:09.330 [2024-05-15 11:02:25.411581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.330 [2024-05-15 11:02:25.411606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.330 qpair failed and we were unable to recover it. 00:22:09.330 [2024-05-15 11:02:25.411811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.330 [2024-05-15 11:02:25.411838] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.330 qpair failed and we were unable to recover it. 00:22:09.330 [2024-05-15 11:02:25.412035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.330 [2024-05-15 11:02:25.412061] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.330 qpair failed and we were unable to recover it. 00:22:09.330 [2024-05-15 11:02:25.412273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.330 [2024-05-15 11:02:25.412298] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.330 qpair failed and we were unable to recover it. 00:22:09.330 [2024-05-15 11:02:25.412503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.330 [2024-05-15 11:02:25.412528] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.330 qpair failed and we were unable to recover it. 00:22:09.330 [2024-05-15 11:02:25.412729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.330 [2024-05-15 11:02:25.412754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.330 qpair failed and we were unable to recover it. 00:22:09.330 [2024-05-15 11:02:25.412944] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.330 [2024-05-15 11:02:25.412970] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.330 qpair failed and we were unable to recover it. 00:22:09.330 [2024-05-15 11:02:25.413175] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.330 [2024-05-15 11:02:25.413201] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.330 qpair failed and we were unable to recover it. 
00:22:09.330 [2024-05-15 11:02:25.413421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.330 [2024-05-15 11:02:25.413447] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.330 qpair failed and we were unable to recover it. 00:22:09.330 [2024-05-15 11:02:25.413654] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.330 [2024-05-15 11:02:25.413679] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.330 qpair failed and we were unable to recover it. 00:22:09.330 [2024-05-15 11:02:25.413860] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.330 [2024-05-15 11:02:25.413885] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.330 qpair failed and we were unable to recover it. 00:22:09.330 [2024-05-15 11:02:25.414077] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.330 [2024-05-15 11:02:25.414104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.330 qpair failed and we were unable to recover it. 00:22:09.330 [2024-05-15 11:02:25.414301] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.330 [2024-05-15 11:02:25.414326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.330 qpair failed and we were unable to recover it. 00:22:09.330 [2024-05-15 11:02:25.414500] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.330 [2024-05-15 11:02:25.414525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.330 qpair failed and we were unable to recover it. 00:22:09.330 [2024-05-15 11:02:25.414700] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.330 [2024-05-15 11:02:25.414725] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.330 qpair failed and we were unable to recover it. 00:22:09.330 [2024-05-15 11:02:25.414909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.330 [2024-05-15 11:02:25.414941] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.330 qpair failed and we were unable to recover it. 00:22:09.330 [2024-05-15 11:02:25.415174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.330 [2024-05-15 11:02:25.415199] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.330 qpair failed and we were unable to recover it. 00:22:09.330 [2024-05-15 11:02:25.415379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.330 [2024-05-15 11:02:25.415408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.330 qpair failed and we were unable to recover it. 
00:22:09.330 [2024-05-15 11:02:25.415616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.330 [2024-05-15 11:02:25.415641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.330 qpair failed and we were unable to recover it. 00:22:09.330 [2024-05-15 11:02:25.415821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.330 [2024-05-15 11:02:25.415846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.330 qpair failed and we were unable to recover it. 00:22:09.330 [2024-05-15 11:02:25.416080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.330 [2024-05-15 11:02:25.416106] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.330 qpair failed and we were unable to recover it. 00:22:09.330 [2024-05-15 11:02:25.416289] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.330 [2024-05-15 11:02:25.416314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.330 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.416505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.416530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.416709] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.416736] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.416938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.416964] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.417150] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.417176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.417345] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.417370] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.417552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.417578] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 
00:22:09.331 [2024-05-15 11:02:25.417783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.417808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.418041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.418068] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.418240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.418265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.418476] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.418502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.418705] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.418730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.418936] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.418962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.419135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.419160] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.419337] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.419362] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.419565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.419590] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.419771] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.419797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 
00:22:09.331 [2024-05-15 11:02:25.419993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.420020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.420232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.420258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.420460] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.420486] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.420690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.420715] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.420890] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.420915] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.421109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.421135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.421326] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.421351] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.421531] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.421557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.421766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.421791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.421998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.422025] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 
00:22:09.331 [2024-05-15 11:02:25.422217] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.422242] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.422419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.422445] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.422623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.422649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.422852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.422877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.423083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.423109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.423288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.423313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.423512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.423537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.423710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.423735] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.423970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.423996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.424169] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.424194] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 
00:22:09.331 [2024-05-15 11:02:25.424412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.424438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.424639] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.424664] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.424843] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.424869] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.425047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.425073] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.425277] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.425302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.425473] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.425498] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.425704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.425729] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.425905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.425935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.426137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.426162] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.426363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.426389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 
00:22:09.331 [2024-05-15 11:02:25.426569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.331 [2024-05-15 11:02:25.426594] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.331 qpair failed and we were unable to recover it. 00:22:09.331 [2024-05-15 11:02:25.426773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.426800] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.427002] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.427029] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.427210] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.427236] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.427414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.427440] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.427619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.427644] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.427818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.427843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.428034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.428060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.428241] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.428268] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.428446] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.428471] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 
00:22:09.332 [2024-05-15 11:02:25.428638] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.428663] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.428864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.428889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.429086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.429113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.429317] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.429342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.429512] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.429537] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.429729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.429754] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.429939] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.429965] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.430137] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.430167] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.430354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.430380] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.430560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.430585] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 
00:22:09.332 [2024-05-15 11:02:25.430796] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.430822] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.431023] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.431049] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.431251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.431277] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.431484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.431510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.431693] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.431720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.431923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.431956] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.432140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.432166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.432341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.432367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.432542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.432568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.432773] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.432798] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 
00:22:09.332 [2024-05-15 11:02:25.432980] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.433006] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.433213] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.433239] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.433469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.433495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.433676] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.433702] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.433901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.433927] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.434146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.434172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.434350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.434375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.434587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.434612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.434799] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.434826] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.435004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.435030] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 
00:22:09.332 [2024-05-15 11:02:25.435240] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.435265] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.435440] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.435465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.435680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.435705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.435905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.435935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.436124] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.436153] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.436341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.436368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.436551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.436576] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.436751] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.436776] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.436955] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.436981] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 00:22:09.332 [2024-05-15 11:02:25.437158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.332 [2024-05-15 11:02:25.437183] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.332 qpair failed and we were unable to recover it. 
00:22:09.332 [2024-05-15 11:02:25.437361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.333 [2024-05-15 11:02:25.437386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.333 qpair failed and we were unable to recover it. 00:22:09.333 [2024-05-15 11:02:25.437617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.333 [2024-05-15 11:02:25.437642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.333 qpair failed and we were unable to recover it. 00:22:09.333 [2024-05-15 11:02:25.437818] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.333 [2024-05-15 11:02:25.437843] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.333 qpair failed and we were unable to recover it. 00:22:09.333 [2024-05-15 11:02:25.438025] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.333 [2024-05-15 11:02:25.438053] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.333 qpair failed and we were unable to recover it. 00:22:09.333 [2024-05-15 11:02:25.438287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.333 [2024-05-15 11:02:25.438312] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.333 qpair failed and we were unable to recover it. 00:22:09.333 [2024-05-15 11:02:25.438495] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.333 [2024-05-15 11:02:25.438520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.333 qpair failed and we were unable to recover it. 00:22:09.333 [2024-05-15 11:02:25.438733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.333 [2024-05-15 11:02:25.438759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.333 qpair failed and we were unable to recover it. 00:22:09.333 [2024-05-15 11:02:25.438942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.333 [2024-05-15 11:02:25.438968] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.333 qpair failed and we were unable to recover it. 00:22:09.333 [2024-05-15 11:02:25.439186] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.333 [2024-05-15 11:02:25.439211] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.333 qpair failed and we were unable to recover it. 00:22:09.333 [2024-05-15 11:02:25.439394] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.333 [2024-05-15 11:02:25.439419] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.333 qpair failed and we were unable to recover it. 
00:22:09.333 [2024-05-15 11:02:25.439624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.333 [2024-05-15 11:02:25.439650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.333 qpair failed and we were unable to recover it.
00:22:09.333 [2024-05-15 11:02:25.439839] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.333 [2024-05-15 11:02:25.439864] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.333 qpair failed and we were unable to recover it.
00:22:09.333 [2024-05-15 11:02:25.440048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.333 [2024-05-15 11:02:25.440074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.333 qpair failed and we were unable to recover it.
[... roughly 200 further repetitions of the same three-line failure record elided; timestamps run from 11:02:25.440 through 11:02:25.485, all against tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 ...]
00:22:09.338 [2024-05-15 11:02:25.485532] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.338 [2024-05-15 11:02:25.485557] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.338 qpair failed and we were unable to recover it.
00:22:09.338 [2024-05-15 11:02:25.485767] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.338 [2024-05-15 11:02:25.485792] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.338 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.486022] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.486048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.486231] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.486258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.486464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.486490] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.486672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.486698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.486889] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.486914] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.487091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.487117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.487298] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.487323] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.487534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.487559] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.487742] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.487769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 
00:22:09.339 [2024-05-15 11:02:25.487996] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.488024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.488201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.488228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.488413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.488439] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.488648] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.488673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.488859] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.488884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.489071] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.489097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.489275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.489300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.489478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.489503] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.489712] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.489737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.489953] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.489979] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 
00:22:09.339 [2024-05-15 11:02:25.490166] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.490191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.490367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.490392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.490591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.490616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.490803] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.490828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.491011] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.491036] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.491246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.491271] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.491462] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.491489] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.491663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.491689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.491901] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.491926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.492132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.492158] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 
00:22:09.339 [2024-05-15 11:02:25.492338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.492363] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.492569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.492595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.492804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.492830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.493015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.493040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.493247] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.493272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.493507] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.493532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.493703] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.493728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.493957] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.493984] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.494184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.494209] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.494397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.494423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 
00:22:09.339 [2024-05-15 11:02:25.494629] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.494654] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.494836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.494862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.495044] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.495071] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.495259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.495284] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.495466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.495491] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.495697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.495722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.495911] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.495951] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.496126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.496152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.496359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.496384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.339 qpair failed and we were unable to recover it. 00:22:09.339 [2024-05-15 11:02:25.496562] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.339 [2024-05-15 11:02:25.496587] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 
00:22:09.340 [2024-05-15 11:02:25.496786] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.496811] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.496991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.497017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.497198] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.497223] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.497401] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.497426] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.497625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.497650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.497821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.497846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.498040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.498067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.498279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.498305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.498486] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.498512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.498690] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.498720] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 
00:22:09.340 [2024-05-15 11:02:25.498909] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.498940] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.499148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.499174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.499353] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.499379] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.499584] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.499610] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.499789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.499815] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.500018] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.500044] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.500220] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.500245] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.500448] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.500473] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.500679] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.500705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.500902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.500928] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 
00:22:09.340 [2024-05-15 11:02:25.501140] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.501165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.501342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.501367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.501552] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.501579] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.501763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.501790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.501969] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.501996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.502207] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.502232] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.502443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.502468] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.502647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.502673] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.502907] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.502937] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.503146] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.503172] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 
00:22:09.340 [2024-05-15 11:02:25.503350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.503375] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.503581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.503606] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.503782] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.503808] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.503998] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.504024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.504194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.504219] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.504433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.504459] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.504641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.504671] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.504882] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.504908] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.505092] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.505118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.505309] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.505334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 
00:22:09.340 [2024-05-15 11:02:25.505537] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.505561] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.340 [2024-05-15 11:02:25.505739] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.340 [2024-05-15 11:02:25.505764] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.340 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.505962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.505989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.506178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.506204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.506390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.506415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.506586] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.506611] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.506785] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.506810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.507016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.507042] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.507219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.507244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.507413] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.507438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 
00:22:09.341 [2024-05-15 11:02:25.507645] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.507670] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.507849] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.507874] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.508105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.508131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.508340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.508366] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.508538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.508563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.508768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.508793] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.508970] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.508996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.509203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.509228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.509395] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.509420] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.509625] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.509650] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 
00:22:09.341 [2024-05-15 11:02:25.509833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.509858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.510049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.510076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.510260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.510285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.510487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.510517] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.510719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.510744] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.510918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.510948] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.511128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.511154] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.511358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.511384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.511561] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.511586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.511784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.511810] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 
00:22:09.341 [2024-05-15 11:02:25.511997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.512023] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.512233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.512258] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.512427] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.512452] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.512627] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.512652] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.512854] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.512879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.513108] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.513134] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.513321] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.513346] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.513527] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.513554] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.513753] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.513778] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 00:22:09.341 [2024-05-15 11:02:25.513991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.341 [2024-05-15 11:02:25.514017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.341 qpair failed and we were unable to recover it. 
00:22:09.341 [2024-05-15 11:02:25.514219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.341 [2024-05-15 11:02:25.514244] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.341 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1037:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats verbatim, with only the timestamps advancing, for roughly 200 further connection attempts between 11:02:25.514 and 11:02:25.560 ...]
00:22:09.618 [2024-05-15 11:02:25.559984] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.618 [2024-05-15 11:02:25.560011] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.618 qpair failed and we were unable to recover it.
00:22:09.618 [2024-05-15 11:02:25.560201] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.618 [2024-05-15 11:02:25.560226] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.618 qpair failed and we were unable to recover it. 00:22:09.618 [2024-05-15 11:02:25.560412] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.618 [2024-05-15 11:02:25.560438] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.618 qpair failed and we were unable to recover it. 00:22:09.618 [2024-05-15 11:02:25.560618] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.618 [2024-05-15 11:02:25.560643] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.618 qpair failed and we were unable to recover it. 00:22:09.618 [2024-05-15 11:02:25.560852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.618 [2024-05-15 11:02:25.560877] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.618 qpair failed and we were unable to recover it. 00:22:09.618 [2024-05-15 11:02:25.561090] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.618 [2024-05-15 11:02:25.561117] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.618 qpair failed and we were unable to recover it. 00:22:09.618 [2024-05-15 11:02:25.561332] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.618 [2024-05-15 11:02:25.561357] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.618 qpair failed and we were unable to recover it. 00:22:09.618 [2024-05-15 11:02:25.561568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.618 [2024-05-15 11:02:25.561593] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.618 qpair failed and we were unable to recover it. 00:22:09.618 [2024-05-15 11:02:25.561772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.618 [2024-05-15 11:02:25.561797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.618 qpair failed and we were unable to recover it. 00:22:09.618 [2024-05-15 11:02:25.561997] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.618 [2024-05-15 11:02:25.562024] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.618 qpair failed and we were unable to recover it. 00:22:09.618 [2024-05-15 11:02:25.562234] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.618 [2024-05-15 11:02:25.562259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.618 qpair failed and we were unable to recover it. 
00:22:09.618 [2024-05-15 11:02:25.562436] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.618 [2024-05-15 11:02:25.562461] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.618 qpair failed and we were unable to recover it. 00:22:09.618 [2024-05-15 11:02:25.562646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.618 [2024-05-15 11:02:25.562672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.618 qpair failed and we were unable to recover it. 00:22:09.618 [2024-05-15 11:02:25.562852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.618 [2024-05-15 11:02:25.562879] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.618 qpair failed and we were unable to recover it. 00:22:09.618 [2024-05-15 11:02:25.563091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.618 [2024-05-15 11:02:25.563118] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.618 qpair failed and we were unable to recover it. 00:22:09.618 [2024-05-15 11:02:25.563324] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.618 [2024-05-15 11:02:25.563349] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.618 qpair failed and we were unable to recover it. 00:22:09.618 [2024-05-15 11:02:25.563524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.618 [2024-05-15 11:02:25.563549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.618 qpair failed and we were unable to recover it. 00:22:09.618 [2024-05-15 11:02:25.563728] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.618 [2024-05-15 11:02:25.563753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.618 qpair failed and we were unable to recover it. 00:22:09.618 [2024-05-15 11:02:25.563920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.618 [2024-05-15 11:02:25.563950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.618 qpair failed and we were unable to recover it. 00:22:09.618 [2024-05-15 11:02:25.564164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.618 [2024-05-15 11:02:25.564190] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.618 qpair failed and we were unable to recover it. 00:22:09.618 [2024-05-15 11:02:25.564420] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.618 [2024-05-15 11:02:25.564446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.618 qpair failed and we were unable to recover it. 
00:22:09.618 [2024-05-15 11:02:25.564662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.618 [2024-05-15 11:02:25.564687] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.618 qpair failed and we were unable to recover it. 00:22:09.618 [2024-05-15 11:02:25.564874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.618 [2024-05-15 11:02:25.564900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.618 qpair failed and we were unable to recover it. 00:22:09.618 [2024-05-15 11:02:25.565088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.618 [2024-05-15 11:02:25.565114] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.618 qpair failed and we were unable to recover it. 00:22:09.618 [2024-05-15 11:02:25.565315] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.618 [2024-05-15 11:02:25.565341] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.618 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.565560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.565586] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.565772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.565797] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.566010] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.566037] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.566223] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.566248] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.566424] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.566451] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.566664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.566691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 
00:22:09.619 [2024-05-15 11:02:25.566884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.566913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.567105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.567131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.567342] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.567368] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.567551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.567577] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.567784] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.567809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.567993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.568020] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.568237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.568263] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.568443] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.568469] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.568662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.568688] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.568871] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.568900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 
00:22:09.619 [2024-05-15 11:02:25.569088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.569115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.569299] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.569325] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.569496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.569522] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.569696] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.569722] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.569922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.569954] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.570158] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.570185] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.570390] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.570415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.570624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.570649] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.570830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.570858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.571043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.571070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 
00:22:09.619 [2024-05-15 11:02:25.571275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.571301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.571508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.571533] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.571711] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.571737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.571946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.571973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.572189] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.572214] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.572442] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.572467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.572672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.572697] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.572875] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.572900] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.573079] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.573109] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.573300] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.573326] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 
00:22:09.619 [2024-05-15 11:02:25.573546] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.573572] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.573763] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.573790] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.573971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.619 [2024-05-15 11:02:25.573996] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.619 qpair failed and we were unable to recover it. 00:22:09.619 [2024-05-15 11:02:25.574182] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.574212] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.574397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.574424] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.574626] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.574653] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.574864] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.574890] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.575104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.575131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.575359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.575384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.575590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.575616] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 
00:22:09.620 [2024-05-15 11:02:25.575804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.575830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.576015] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.576041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.576233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.576262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.576479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.576505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.576692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.576719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.576940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.576967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.577148] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.577174] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.577359] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.577386] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.577573] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.577601] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.577793] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.577818] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 
00:22:09.620 [2024-05-15 11:02:25.578016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.578043] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.578249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.578275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.578487] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.578512] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.578704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.578743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.578946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.578973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.579174] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.579204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.579419] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.579446] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.579617] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.579642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.579828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.579854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.580040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.580067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 
00:22:09.620 [2024-05-15 11:02:25.580249] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.580275] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.580477] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.580502] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.580715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.580743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.580971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.580998] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.581226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.581252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.581459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.581485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.581684] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.581710] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.581894] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.581920] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.582126] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.582152] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.582346] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.582373] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 
00:22:09.620 [2024-05-15 11:02:25.582549] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.582574] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.582835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.582861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.620 qpair failed and we were unable to recover it. 00:22:09.620 [2024-05-15 11:02:25.583103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.620 [2024-05-15 11:02:25.583129] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.583322] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.583350] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.583530] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.583556] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.583765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.583791] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.584008] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.584034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.584208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.584234] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.584441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.584467] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.584651] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.584677] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 
00:22:09.621 [2024-05-15 11:02:25.584852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.584878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.585072] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.585100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.585287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.585313] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.585496] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.585520] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.585704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.585730] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.585905] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.585949] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.586128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.586157] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.586379] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.586405] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.586616] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.586641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.586829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.586855] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 
00:22:09.621 [2024-05-15 11:02:25.587053] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.587080] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.587259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.587285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.587459] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.587485] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.587691] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.587717] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.587918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.587952] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.588162] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.588191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.588404] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.588430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.588635] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.588661] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.588836] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.588861] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.589078] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.589104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 
00:22:09.621 [2024-05-15 11:02:25.589302] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.589329] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.589510] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.589536] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.589745] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.589771] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.589964] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.589990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.590178] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.590204] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.590383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.590408] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.590592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.590618] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.590789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.590814] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.590990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.591017] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.591202] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.591228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 
00:22:09.621 [2024-05-15 11:02:25.591452] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.591478] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.621 [2024-05-15 11:02:25.591688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.621 [2024-05-15 11:02:25.591713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.621 qpair failed and we were unable to recover it. 00:22:09.622 [2024-05-15 11:02:25.591923] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.622 [2024-05-15 11:02:25.591969] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.622 qpair failed and we were unable to recover it. 00:22:09.622 [2024-05-15 11:02:25.592151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.622 [2024-05-15 11:02:25.592177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.622 qpair failed and we were unable to recover it. 00:22:09.622 [2024-05-15 11:02:25.592367] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.622 [2024-05-15 11:02:25.592393] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.622 qpair failed and we were unable to recover it. 00:22:09.622 [2024-05-15 11:02:25.592570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.622 [2024-05-15 11:02:25.592596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.622 qpair failed and we were unable to recover it. 00:22:09.622 [2024-05-15 11:02:25.592778] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.622 [2024-05-15 11:02:25.592807] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.622 qpair failed and we were unable to recover it. 00:22:09.622 [2024-05-15 11:02:25.592990] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.622 [2024-05-15 11:02:25.593016] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.622 qpair failed and we were unable to recover it. 00:22:09.622 [2024-05-15 11:02:25.593232] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.622 [2024-05-15 11:02:25.593259] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.622 qpair failed and we were unable to recover it. 00:22:09.622 [2024-05-15 11:02:25.593447] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.622 [2024-05-15 11:02:25.593476] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.622 qpair failed and we were unable to recover it. 
00:22:09.622 [2024-05-15 11:02:25.593662] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.622 [2024-05-15 11:02:25.593689] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.622 qpair failed and we were unable to recover it. 00:22:09.622 [2024-05-15 11:02:25.593869] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.622 [2024-05-15 11:02:25.593895] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.622 qpair failed and we were unable to recover it. 00:22:09.622 [2024-05-15 11:02:25.594086] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.622 [2024-05-15 11:02:25.594113] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.622 qpair failed and we were unable to recover it. 00:22:09.622 [2024-05-15 11:02:25.594290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.622 [2024-05-15 11:02:25.594320] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.622 qpair failed and we were unable to recover it. 00:22:09.622 [2024-05-15 11:02:25.594523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.622 [2024-05-15 11:02:25.594549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.622 qpair failed and we were unable to recover it. 00:22:09.622 [2024-05-15 11:02:25.594727] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.622 [2024-05-15 11:02:25.594753] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.622 qpair failed and we were unable to recover it. 00:22:09.622 [2024-05-15 11:02:25.594960] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.622 [2024-05-15 11:02:25.594986] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.622 qpair failed and we were unable to recover it. 00:22:09.622 [2024-05-15 11:02:25.595194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.623 [2024-05-15 11:02:25.595221] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.623 qpair failed and we were unable to recover it. 00:22:09.623 [2024-05-15 11:02:25.595437] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.623 [2024-05-15 11:02:25.595463] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.623 qpair failed and we were unable to recover it. 00:22:09.623 [2024-05-15 11:02:25.595664] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.623 [2024-05-15 11:02:25.595690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.623 qpair failed and we were unable to recover it. 
00:22:09.623 [2024-05-15 11:02:25.595893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.623 [2024-05-15 11:02:25.595919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.623 qpair failed and we were unable to recover it. 00:22:09.623 [2024-05-15 11:02:25.596105] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.623 [2024-05-15 11:02:25.596131] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.623 qpair failed and we were unable to recover it. 00:22:09.623 [2024-05-15 11:02:25.596334] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.623 [2024-05-15 11:02:25.596359] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.623 qpair failed and we were unable to recover it. 00:22:09.623 [2024-05-15 11:02:25.596556] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.623 [2024-05-15 11:02:25.596582] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.623 qpair failed and we were unable to recover it. 00:22:09.623 [2024-05-15 11:02:25.596783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.623 [2024-05-15 11:02:25.596809] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.623 qpair failed and we were unable to recover it. 00:22:09.623 [2024-05-15 11:02:25.597056] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.623 [2024-05-15 11:02:25.597082] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.623 qpair failed and we were unable to recover it. 00:22:09.623 [2024-05-15 11:02:25.597265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.623 [2024-05-15 11:02:25.597290] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.623 qpair failed and we were unable to recover it. 00:22:09.623 [2024-05-15 11:02:25.597475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.623 [2024-05-15 11:02:25.597501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.623 qpair failed and we were unable to recover it. 00:22:09.623 [2024-05-15 11:02:25.597702] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.623 [2024-05-15 11:02:25.597728] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.623 qpair failed and we were unable to recover it. 00:22:09.623 [2024-05-15 11:02:25.597922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.623 [2024-05-15 11:02:25.597953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.623 qpair failed and we were unable to recover it. 
00:22:09.623 [2024-05-15 11:02:25.598151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.623 [2024-05-15 11:02:25.598177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.623 qpair failed and we were unable to recover it. 00:22:09.623 [2024-05-15 11:02:25.598381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.623 [2024-05-15 11:02:25.598407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.623 qpair failed and we were unable to recover it. 00:22:09.623 [2024-05-15 11:02:25.598593] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.623 [2024-05-15 11:02:25.598619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.623 qpair failed and we were unable to recover it. 00:22:09.623 [2024-05-15 11:02:25.598828] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.623 [2024-05-15 11:02:25.598856] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.623 qpair failed and we were unable to recover it. 00:22:09.623 [2024-05-15 11:02:25.599063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.599090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.599275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.599301] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.599513] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.599539] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.599729] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.599755] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.599952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.599990] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.600171] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.600198] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 
00:22:09.624 [2024-05-15 11:02:25.600399] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.600430] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.600615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.600641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.600812] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.600837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.601039] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.601065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.601269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.601295] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.601474] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.601501] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.601697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.601726] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.601900] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.601926] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.602117] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.602143] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.602316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.602342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 
00:22:09.624 [2024-05-15 11:02:25.602524] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.602549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.602722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.602746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.602921] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.602953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.603142] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.603168] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.603376] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.603403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.603611] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.603638] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.603821] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.603846] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.604088] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.604115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.604290] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.604315] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.604523] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.604549] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 
00:22:09.624 [2024-05-15 11:02:25.604734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.604760] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.604940] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.604967] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.605151] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.605176] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.605358] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.605384] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.605569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.605595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.605806] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.605832] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.606038] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.606065] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.606263] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.606293] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.606534] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.624 [2024-05-15 11:02:25.606560] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.624 qpair failed and we were unable to recover it. 00:22:09.624 [2024-05-15 11:02:25.606768] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.606794] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 
00:22:09.625 [2024-05-15 11:02:25.607013] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.607040] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.607226] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.607253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.607429] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.607456] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.607633] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.607659] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.607858] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.607884] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.608062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.608089] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.608340] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.608365] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.608538] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.608563] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.608741] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.608767] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.608951] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.608977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 
00:22:09.625 [2024-05-15 11:02:25.609167] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.609193] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.609380] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.609406] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.609587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.609612] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.609807] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.609833] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.610027] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.610055] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.610237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.610264] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.610454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.610480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.610661] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.610686] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.610899] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.610925] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.611119] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.611145] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 
00:22:09.625 [2024-05-15 11:02:25.611386] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.611411] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.611589] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.611615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.611826] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.611852] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.612034] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.612060] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.612244] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.612269] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.612454] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.612480] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.612671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.612696] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.612879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.612905] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.613104] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.613130] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.613307] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.613334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 
00:22:09.625 [2024-05-15 11:02:25.613505] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.613531] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.613707] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.613732] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.625 [2024-05-15 11:02:25.613920] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.625 [2024-05-15 11:02:25.613962] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.625 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.614133] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.614159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.614366] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.614392] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.614596] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.614622] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.614811] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.614837] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.615031] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.615058] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.615286] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.615327] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.615554] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.615581] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 
00:22:09.626 [2024-05-15 11:02:25.615819] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.615845] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.616051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.616078] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.616288] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.616314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.616502] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.616530] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.616713] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.616741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.616959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.616988] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.617168] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.617195] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.617397] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.617423] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.617663] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.617690] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.617897] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.617923] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 
00:22:09.626 [2024-05-15 11:02:25.618110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.618138] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.618325] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.618352] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.618569] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.618595] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.618804] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.618830] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.619043] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.619070] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.619312] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.619337] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.619508] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.619534] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.619725] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.619752] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.619946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.619973] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.620156] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.620182] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 
00:22:09.626 [2024-05-15 11:02:25.620378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.620404] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.620615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.620642] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.620830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.620857] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.621074] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.621104] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.621287] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.621314] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.621539] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.621565] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.621737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.621762] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.621975] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.622003] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.626 qpair failed and we were unable to recover it. 00:22:09.626 [2024-05-15 11:02:25.622183] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.626 [2024-05-15 11:02:25.622208] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3540000b90 with addr=10.0.0.2, port=4420 00:22:09.627 qpair failed and we were unable to recover it. 00:22:09.627 [2024-05-15 11:02:25.622414] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.627 [2024-05-15 11:02:25.622442] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.627 qpair failed and we were unable to recover it. 
00:22:09.627 [2024-05-15 11:02:25.622641] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.627 [2024-05-15 11:02:25.622667] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.627 qpair failed and we were unable to recover it. 00:22:09.627 [2024-05-15 11:02:25.622852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.627 [2024-05-15 11:02:25.622878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.627 qpair failed and we were unable to recover it. 00:22:09.627 [2024-05-15 11:02:25.623064] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.627 [2024-05-15 11:02:25.623091] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.627 qpair failed and we were unable to recover it. 00:22:09.627 [2024-05-15 11:02:25.623276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.627 [2024-05-15 11:02:25.623302] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.627 qpair failed and we were unable to recover it. 00:22:09.627 [2024-05-15 11:02:25.623490] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.627 [2024-05-15 11:02:25.623516] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.627 qpair failed and we were unable to recover it. 00:22:09.627 [2024-05-15 11:02:25.623732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.627 [2024-05-15 11:02:25.623758] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.627 qpair failed and we were unable to recover it. 00:22:09.627 [2024-05-15 11:02:25.623973] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.627 [2024-05-15 11:02:25.624000] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.627 qpair failed and we were unable to recover it. 00:22:09.627 [2024-05-15 11:02:25.624177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.627 [2024-05-15 11:02:25.624202] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.627 qpair failed and we were unable to recover it. 00:22:09.627 [2024-05-15 11:02:25.624417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.627 [2024-05-15 11:02:25.624448] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.627 qpair failed and we were unable to recover it. 00:22:09.627 [2024-05-15 11:02:25.624623] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.627 [2024-05-15 11:02:25.624648] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.627 qpair failed and we were unable to recover it. 
00:22:09.627 [2024-05-15 11:02:25.624861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.627 [2024-05-15 11:02:25.624887] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.627 qpair failed and we were unable to recover it. 00:22:09.627 [2024-05-15 11:02:25.625070] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.627 [2024-05-15 11:02:25.625097] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.627 qpair failed and we were unable to recover it. 00:22:09.627 [2024-05-15 11:02:25.625308] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.627 [2024-05-15 11:02:25.625334] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.627 qpair failed and we were unable to recover it. 00:22:09.627 [2024-05-15 11:02:25.625542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.627 [2024-05-15 11:02:25.625567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.627 qpair failed and we were unable to recover it. 00:22:09.627 [2024-05-15 11:02:25.625752] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.627 [2024-05-15 11:02:25.625777] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.627 qpair failed and we were unable to recover it. 00:22:09.627 [2024-05-15 11:02:25.625954] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.627 [2024-05-15 11:02:25.625980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.627 qpair failed and we were unable to recover it. 00:22:09.627 [2024-05-15 11:02:25.626173] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.627 [2024-05-15 11:02:25.626203] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.627 qpair failed and we were unable to recover it. 00:22:09.627 [2024-05-15 11:02:25.626432] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.627 [2024-05-15 11:02:25.626458] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.627 qpair failed and we were unable to recover it. 00:22:09.627 [2024-05-15 11:02:25.626665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.627 [2024-05-15 11:02:25.626691] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.627 qpair failed and we were unable to recover it. 00:22:09.627 [2024-05-15 11:02:25.626863] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.627 [2024-05-15 11:02:25.626889] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420 00:22:09.627 qpair failed and we were unable to recover it. 
00:22:09.627 [2024-05-15 11:02:25.627076] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.627 [2024-05-15 11:02:25.627103] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0d420 with addr=10.0.0.2, port=4420
00:22:09.627 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for tqpair=0x1e0d420, only the timestamps changing, from 11:02:25.627295 through 11:02:25.633114 ...]
00:22:09.628 A controller has encountered a failure and is being reset.
00:22:09.628 [2024-05-15 11:02:25.633341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.628 [2024-05-15 11:02:25.633381] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3530000b90 with addr=10.0.0.2, port=4420
00:22:09.628 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for tqpair=0x7f3530000b90, only the timestamps changing, from 11:02:25.633569 through 11:02:25.653906 ...]
00:22:09.631 [2024-05-15 11:02:25.654135] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.631 [2024-05-15 11:02:25.654177] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:09.631 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for tqpair=0x7f3538000b90, only the timestamps changing, from 11:02:25.654423 through 11:02:25.673390 ...]
00:22:09.633 [2024-05-15 11:02:25.673604] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:09.633 [2024-05-15 11:02:25.673629] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420
00:22:09.633 qpair failed and we were unable to recover it.
00:22:09.633 [2024-05-15 11:02:25.673835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.633 [2024-05-15 11:02:25.673862] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.633 qpair failed and we were unable to recover it. 00:22:09.633 [2024-05-15 11:02:25.674047] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.633 [2024-05-15 11:02:25.674074] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.633 qpair failed and we were unable to recover it. 00:22:09.633 [2024-05-15 11:02:25.674253] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.633 [2024-05-15 11:02:25.674279] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.633 qpair failed and we were unable to recover it. 00:22:09.633 [2024-05-15 11:02:25.674479] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.633 [2024-05-15 11:02:25.674505] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.633 qpair failed and we were unable to recover it. 00:22:09.633 [2024-05-15 11:02:25.674680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.633 [2024-05-15 11:02:25.674705] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.633 qpair failed and we were unable to recover it. 00:22:09.633 [2024-05-15 11:02:25.674946] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.633 [2024-05-15 11:02:25.674972] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.633 qpair failed and we were unable to recover it. 00:22:09.633 [2024-05-15 11:02:25.675154] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.633 [2024-05-15 11:02:25.675180] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.633 qpair failed and we were unable to recover it. 00:22:09.633 [2024-05-15 11:02:25.675361] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.633 [2024-05-15 11:02:25.675389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.633 qpair failed and we were unable to recover it. 00:22:09.633 [2024-05-15 11:02:25.675594] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.633 [2024-05-15 11:02:25.675621] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.633 qpair failed and we were unable to recover it. 00:22:09.633 [2024-05-15 11:02:25.675827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.675853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 
00:22:09.634 [2024-05-15 11:02:25.676066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.676093] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.676343] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.676369] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.676570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.676596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.676789] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.676817] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.677032] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.677059] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.677274] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.677300] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.677484] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.677510] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.677715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.677741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.677925] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.677957] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.678138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.678164] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 
00:22:09.634 [2024-05-15 11:02:25.678350] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.678376] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.678588] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.678615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.678833] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.678860] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.679048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.679075] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.679260] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.679285] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.679485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.679511] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.679719] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.679746] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.679952] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.679980] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.680184] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.680210] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.680389] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.680415] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 
00:22:09.634 [2024-05-15 11:02:25.680598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.680625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.680832] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.680858] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.681066] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.681096] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.681279] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.681305] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.681478] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.681504] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.681715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.681741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.681949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.681977] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.682160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.682188] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.682381] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.682407] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.682590] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.682615] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 
00:22:09.634 [2024-05-15 11:02:25.682827] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.682853] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.683063] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.683090] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.683276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.683303] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.683506] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.683532] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.683743] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.683769] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.683978] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.684005] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.684187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.684213] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.684396] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.634 [2024-05-15 11:02:25.684422] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.634 qpair failed and we were unable to recover it. 00:22:09.634 [2024-05-15 11:02:25.684614] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.684641] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.684814] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.684840] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 
00:22:09.635 [2024-05-15 11:02:25.685048] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.685076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.685268] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.685294] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.685499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.685526] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.685760] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.685787] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.686004] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.686031] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.686265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.686291] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.686469] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.686495] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.686680] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.686706] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.686887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.686913] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.687138] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.687166] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 
00:22:09.635 [2024-05-15 11:02:25.687377] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.687403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.687606] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.687632] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.687844] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.687871] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.688049] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.688076] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.688257] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.688283] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.688488] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.688513] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.688715] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.688741] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.688949] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.688975] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.689179] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.689205] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.689417] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.689443] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 
00:22:09.635 [2024-05-15 11:02:25.689619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.689646] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.689852] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.689878] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.690082] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.690115] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.690291] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.690316] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.690499] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.690525] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.690710] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.690737] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.691026] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.691054] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.691269] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.691296] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.691480] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.691507] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.691718] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.691743] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 
00:22:09.635 [2024-05-15 11:02:25.691918] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.691953] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.692139] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.692165] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.692341] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.692367] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.692542] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.692568] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.692779] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.692805] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.693005] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.635 [2024-05-15 11:02:25.693032] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.635 qpair failed and we were unable to recover it. 00:22:09.635 [2024-05-15 11:02:25.693219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.693246] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.693457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.693483] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.693669] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.693695] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.693874] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.693899] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 
00:22:09.636 [2024-05-15 11:02:25.694109] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.694136] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.694316] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.694342] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.694541] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.694567] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.694774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.694799] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.694991] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.695018] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.695203] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.695228] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.695433] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.695460] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.695646] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.695672] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.695877] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.695903] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.696098] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.696126] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 
00:22:09.636 [2024-05-15 11:02:25.696368] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.696394] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.696600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.696625] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.696801] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.696828] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.697021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.697048] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.697227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.697252] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.697439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.697465] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.697672] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.697698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.697916] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.697950] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.698164] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.698191] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.698378] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.698403] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 
00:22:09.636 [2024-05-15 11:02:25.698619] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.698645] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.698829] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.698854] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.699042] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.699072] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.699246] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.699272] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.699458] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.699484] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.699692] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.699719] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.699903] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.699935] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.700110] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.700135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.700338] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.700364] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.700565] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.700592] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 
00:22:09.636 [2024-05-15 11:02:25.700797] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.700824] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.701007] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.701034] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.701236] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.701262] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.701449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.701475] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.636 [2024-05-15 11:02:25.701686] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.636 [2024-05-15 11:02:25.701713] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.636 qpair failed and we were unable to recover it. 00:22:09.637 [2024-05-15 11:02:25.701893] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.637 [2024-05-15 11:02:25.701919] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.637 qpair failed and we were unable to recover it. 00:22:09.637 [2024-05-15 11:02:25.702132] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.637 [2024-05-15 11:02:25.702159] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.637 qpair failed and we were unable to recover it. 00:22:09.637 [2024-05-15 11:02:25.702362] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.637 [2024-05-15 11:02:25.702389] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.637 qpair failed and we were unable to recover it. 00:22:09.637 [2024-05-15 11:02:25.702592] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.637 [2024-05-15 11:02:25.702619] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.637 qpair failed and we were unable to recover it. 00:22:09.637 [2024-05-15 11:02:25.702795] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.637 [2024-05-15 11:02:25.702821] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.637 qpair failed and we were unable to recover it. 
00:22:09.637 [2024-05-15 11:02:25.703020] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.637 [2024-05-15 11:02:25.703047] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.637 qpair failed and we were unable to recover it. 00:22:09.637 [2024-05-15 11:02:25.703227] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.637 [2024-05-15 11:02:25.703253] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.637 qpair failed and we were unable to recover it. 00:22:09.637 [2024-05-15 11:02:25.703525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.637 [2024-05-15 11:02:25.703551] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.637 qpair failed and we were unable to recover it. 00:22:09.637 [2024-05-15 11:02:25.703734] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.637 [2024-05-15 11:02:25.703759] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.637 qpair failed and we were unable to recover it. 00:22:09.637 [2024-05-15 11:02:25.703962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.637 [2024-05-15 11:02:25.703989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.637 qpair failed and we were unable to recover it. 00:22:09.637 [2024-05-15 11:02:25.704160] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.637 [2024-05-15 11:02:25.704186] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.637 qpair failed and we were unable to recover it. 00:22:09.637 [2024-05-15 11:02:25.704364] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.637 [2024-05-15 11:02:25.704391] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.637 qpair failed and we were unable to recover it. 00:22:09.637 [2024-05-15 11:02:25.704568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.637 [2024-05-15 11:02:25.704596] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.637 qpair failed and we were unable to recover it. 00:22:09.637 [2024-05-15 11:02:25.704775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.637 [2024-05-15 11:02:25.704801] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f3538000b90 with addr=10.0.0.2, port=4420 00:22:09.637 qpair failed and we were unable to recover it. 
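Note on the repeated failure above: errno = 111 is ECONNREFUSED, i.e. while the target side of the disconnect test is down nothing is listening on 10.0.0.2:4420, so every reconnect attempt from the host is refused immediately. A minimal shell probe for the same condition (an illustrative sketch, not part of the test scripts; assumes a netcat with -z support, such as OpenBSD nc, and reuses the address and port from the log):

    # Probe the NVMe/TCP listener the host keeps retrying against.
    if nc -z -w 1 10.0.0.2 4420; then
        echo "listener up on 10.0.0.2:4420 - connect() would succeed"
    else
        echo "connection refused or timed out - the host logs errno = 111"
    fi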
00:22:09.637 [2024-05-15 11:02:25.705046] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.637 [2024-05-15 11:02:25.705100] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e0a0b0 with addr=10.0.0.2, port=4420 00:22:09.637 [2024-05-15 11:02:25.705121] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0a0b0 is same with the state(5) to be set 00:22:09.637 [2024-05-15 11:02:25.705146] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0a0b0 (9): Bad file descriptor 00:22:09.637 [2024-05-15 11:02:25.705166] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:09.637 [2024-05-15 11:02:25.705180] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:09.637 [2024-05-15 11:02:25.705197] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:09.637 Unable to reset the controller. 00:22:09.637 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:09.637 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # return 0 00:22:09.637 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:09.637 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:09.637 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:09.637 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.637 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:09.637 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.637 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:09.896 Malloc0 00:22:09.896 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.896 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:09.896 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.896 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:09.896 [2024-05-15 11:02:25.850619] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.896 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.896 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:09.896 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.896 11:02:25 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:09.896 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.896 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:09.896 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.896 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:09.896 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.896 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:09.896 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.896 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:09.896 [2024-05-15 11:02:25.878608] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:09.896 [2024-05-15 11:02:25.878882] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.896 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.896 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:09.896 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.896 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:09.896 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.896 11:02:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@58 -- # wait 2891623 00:22:10.829 Controller properly reset. 00:22:15.007 Initializing NVMe Controllers 00:22:15.008 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:15.008 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:15.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:22:15.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:22:15.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:22:15.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:22:15.008 Initialization complete. Launching workers. 
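The rpc_cmd calls traced above, from bdev_malloc_create through the two nvmf_subsystem_add_listener calls, are the entire target bring-up for this test case. A standalone sketch of the same sequence driven through scripts/rpc.py against the default /var/tmp/spdk.sock socket, with every flag copied verbatim from the trace:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MB malloc bdev, 512-byte blocks
    $RPC nvmf_create_transport -t tcp -o             # TCP transport, flags as in the trace
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420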
00:22:15.008 Starting thread on core 1 00:22:15.008 Starting thread on core 2 00:22:15.008 Starting thread on core 3 00:22:15.008 Starting thread on core 0 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@59 -- # sync 00:22:15.008 00:22:15.008 real 0m10.726s 00:22:15.008 user 0m30.003s 00:22:15.008 sys 0m7.794s 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:15.008 ************************************ 00:22:15.008 END TEST nvmf_target_disconnect_tc2 00:22:15.008 ************************************ 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@85 -- # nvmftestfini 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:15.008 rmmod nvme_tcp 00:22:15.008 rmmod nvme_fabrics 00:22:15.008 rmmod nvme_keyring 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2892148 ']' 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2892148 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@946 -- # '[' -z 2892148 ']' 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # kill -0 2892148 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # uname 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2892148 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # process_name=reactor_4 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # '[' reactor_4 = sudo ']' 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2892148' 00:22:15.008 killing process with pid 2892148 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@965 -- # kill 2892148 00:22:15.008 [2024-05-15 11:02:31.168525] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 
1 times 00:22:15.008 11:02:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # wait 2892148 00:22:15.266 11:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:15.266 11:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:15.266 11:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:15.266 11:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:15.266 11:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:15.266 11:02:31 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:15.266 11:02:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:15.266 11:02:31 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.800 11:02:33 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:17.800 00:22:17.800 real 0m15.931s 00:22:17.800 user 0m55.206s 00:22:17.800 sys 0m10.644s 00:22:17.800 11:02:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:17.800 11:02:33 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:22:17.800 ************************************ 00:22:17.800 END TEST nvmf_target_disconnect 00:22:17.800 ************************************ 00:22:17.800 11:02:33 nvmf_tcp -- nvmf/nvmf.sh@124 -- # timing_exit host 00:22:17.800 11:02:33 nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:17.800 11:02:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:17.800 11:02:33 nvmf_tcp -- nvmf/nvmf.sh@126 -- # trap - SIGINT SIGTERM EXIT 00:22:17.800 00:22:17.800 real 16m50.280s 00:22:17.800 user 39m51.778s 00:22:17.800 sys 4m51.777s 00:22:17.800 11:02:33 nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:22:17.800 11:02:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:17.800 ************************************ 00:22:17.800 END TEST nvmf_tcp 00:22:17.800 ************************************ 00:22:17.800 11:02:33 -- spdk/autotest.sh@12 -- # hostname 00:22:17.800 11:02:33 -- spdk/autotest.sh@12 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_tcp.info 00:22:17.800 geninfo: WARNING: invalid characters removed from testname! 
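The lcov invocation above captures the per-test coverage counters into nvmf_tcp.info; the commands that follow print any io_uring mentions found in that capture and then delete it. A condensed sketch of the same capture-and-check step, with the long workspace paths shortened to a hypothetical $output_dir for readability:

    OUT=$output_dir/nvmf_tcp.info      # stands in for .../spdk/../output/nvmf_tcp.info
    lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
         --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
         --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external \
         -q -c -d ./spdk -t spdk-gp-11 -o "$OUT"
    grep -i uring "$OUT"               # list URING mentions in the fresh capture
    rm "$OUT"                          # the capture is only kept long enough to check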
00:22:49.886 11:03:00 -- spdk/autotest.sh@13 -- # echo '### URING mentions in coverage after the test ###:' 00:22:49.886 ### URING mentions in coverage after the test ###: 00:22:49.886 11:03:00 -- spdk/autotest.sh@14 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_tcp.info 00:22:49.886 11:03:00 -- spdk/autotest.sh@14 -- # grep -i uring 00:22:49.886 11:03:00 -- spdk/autotest.sh@15 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_tcp.info 00:22:49.886 11:03:00 -- spdk/autotest.sh@297 -- # [[ 0 -eq 0 ]] 00:22:49.886 11:03:00 -- spdk/autotest.sh@298 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:22:49.886 11:03:00 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:22:49.886 11:03:00 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:22:49.886 11:03:00 -- common/autotest_common.sh@10 -- # set +x 00:22:49.886 ************************************ 00:22:49.886 START TEST spdkcli_nvmf_tcp 00:22:49.886 ************************************ 00:22:49.886 11:03:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:22:49.886 * Looking for test storage... 00:22:49.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:49.886 11:03:01 spdkcli_nvmf_tcp 
-- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # 
timing_enter run_nvmf_tgt 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2896885 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2896885 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@827 -- # '[' -z 2896885 ']' 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@832 -- # local max_retries=100 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # xtrace_disable 00:22:49.886 11:03:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:49.887 [2024-05-15 11:03:01.111302] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:22:49.887 [2024-05-15 11:03:01.111393] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896885 ] 00:22:49.887 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.887 [2024-05-15 11:03:01.186381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:49.887 [2024-05-15 11:03:01.309958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:49.887 [2024-05-15 11:03:01.309978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.887 11:03:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:22:49.887 11:03:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # return 0 00:22:49.887 11:03:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:22:49.887 11:03:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:49.887 11:03:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:49.887 11:03:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:22:49.887 11:03:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:22:49.887 11:03:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:22:49.887 11:03:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:49.887 11:03:01 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:49.887 11:03:01 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:22:49.887 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:22:49.887 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:22:49.887 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:22:49.887 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 
00:22:49.887 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:22:49.887 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:22:49.887 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:22:49.887 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:22:49.887 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:22:49.887 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:49.887 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:49.887 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:22:49.887 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:49.887 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:49.887 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:22:49.887 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:22:49.887 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:22:49.887 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:22:49.887 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:49.887 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:22:49.887 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:22:49.887 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:22:49.887 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:22:49.887 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:22:49.887 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:22:49.887 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:22:49.887 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:22:49.887 ' 00:22:49.887 [2024-05-15 11:03:04.002825] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:49.887 [2024-05-15 11:03:05.242679] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:49.887 [2024-05-15 11:03:05.243273] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:22:51.786 [2024-05-15 11:03:07.534407] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 
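spdkcli_job.py, invoked above, replays each quoted triple of command, expected-output substring, and expected-success flag against the running target; the "Executing command" lines further down are its per-entry results. The same objects can also be created one at a time with scripts/spdkcli.py, which accepts commands as arguments (as the `ll /nvmf` call later in this test shows). A short sketch of the first few entries from the job above:

    SPDKCLI=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py
    $SPDKCLI "/bdevs/malloc create 32 512 Malloc1"   # 32 MB malloc bdev named Malloc1
    $SPDKCLI "/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True"
    $SPDKCLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4"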
00:22:53.685 [2024-05-15 11:03:09.512625] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:22:55.059 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:22:55.059 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:22:55.059 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:22:55.059 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:22:55.059 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:22:55.059 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:22:55.059 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:22:55.059 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:22:55.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:22:55.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:22:55.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:55.059 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:55.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:22:55.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:55.059 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:55.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:22:55.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:22:55.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:22:55.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:22:55.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:55.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:22:55.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:22:55.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:22:55.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:22:55.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:22:55.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 
'Malloc5', True] 00:22:55.059 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:22:55.059 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:22:55.059 11:03:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:22:55.059 11:03:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:55.059 11:03:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:55.059 11:03:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:22:55.059 11:03:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:55.059 11:03:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:55.059 11:03:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:22:55.059 11:03:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:22:55.317 11:03:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:22:55.576 11:03:11 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:22:55.576 11:03:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:22:55.576 11:03:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:55.576 11:03:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:55.576 11:03:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:22:55.576 11:03:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@720 -- # xtrace_disable 00:22:55.576 11:03:11 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:55.576 11:03:11 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:22:55.576 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:22:55.576 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:22:55.576 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:22:55.576 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:22:55.576 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:22:55.576 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:22:55.576 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:22:55.576 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:22:55.576 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:22:55.576 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:22:55.576 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:22:55.576 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:22:55.576 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:22:55.576 ' 00:23:00.842 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:23:00.842 
Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:23:00.842 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:23:00.842 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:23:00.842 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:23:00.842 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:23:00.842 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:23:00.842 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:23:00.842 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:23:00.842 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:23:00.842 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:23:00.842 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:23:00.842 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:23:00.842 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:23:00.842 11:03:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:23:00.842 11:03:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:00.842 11:03:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:00.842 11:03:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2896885 00:23:00.842 11:03:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 2896885 ']' 00:23:00.842 11:03:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 2896885 00:23:00.842 11:03:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # uname 00:23:00.842 11:03:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:00.842 11:03:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2896885 00:23:00.843 11:03:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:00.843 11:03:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:00.843 11:03:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2896885' 00:23:00.843 killing process with pid 2896885 00:23:00.843 11:03:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # kill 2896885 00:23:00.843 [2024-05-15 11:03:16.851654] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:00.843 11:03:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # wait 2896885 00:23:01.101 11:03:17 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:23:01.101 11:03:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:23:01.101 11:03:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2896885 ']' 00:23:01.101 11:03:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2896885 00:23:01.101 11:03:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@946 -- # '[' -z 2896885 ']' 00:23:01.101 11:03:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # kill -0 2896885 
00:23:01.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2896885) - No such process 00:23:01.101 11:03:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # echo 'Process with pid 2896885 is not found' 00:23:01.101 Process with pid 2896885 is not found 00:23:01.102 11:03:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:23:01.102 11:03:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:23:01.102 11:03:17 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:23:01.102 00:23:01.102 real 0m16.121s 00:23:01.102 user 0m33.952s 00:23:01.102 sys 0m0.857s 00:23:01.102 11:03:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:01.102 11:03:17 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:01.102 ************************************ 00:23:01.102 END TEST spdkcli_nvmf_tcp 00:23:01.102 ************************************ 00:23:01.102 11:03:17 -- spdk/autotest.sh@299 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:23:01.102 11:03:17 -- common/autotest_common.sh@1097 -- # '[' 3 -le 1 ']' 00:23:01.102 11:03:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:01.102 11:03:17 -- common/autotest_common.sh@10 -- # set +x 00:23:01.102 ************************************ 00:23:01.102 START TEST nvmf_identify_passthru 00:23:01.102 ************************************ 00:23:01.102 11:03:17 nvmf_identify_passthru -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:23:01.102 * Looking for test storage... 
00:23:01.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:01.102 11:03:17 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:01.102 11:03:17 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:01.102 11:03:17 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.102 11:03:17 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.102 11:03:17 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.102 11:03:17 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.102 11:03:17 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.102 11:03:17 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:23:01.102 11:03:17 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:01.102 11:03:17 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:01.102 11:03:17 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:01.102 11:03:17 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:01.102 11:03:17 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:01.102 11:03:17 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.102 11:03:17 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.102 11:03:17 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.102 11:03:17 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:23:01.102 11:03:17 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:01.102 11:03:17 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.102 11:03:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:01.102 11:03:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:01.102 11:03:17 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:23:01.102 11:03:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:23:03.634 11:03:19 
nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:03.634 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:03.634 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:03.634 
11:03:19 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:03.634 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:03.634 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:03.634 11:03:19 
nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:03.634 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:03.635 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:03.635 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:03.635 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:03.635 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:03.635 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:03.635 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:03.635 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:03.635 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:03.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:03.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:23:03.635 00:23:03.635 --- 10.0.0.2 ping statistics --- 00:23:03.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.635 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:23:03.635 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:03.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:03.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:23:03.635 00:23:03.635 --- 10.0.0.1 ping statistics --- 00:23:03.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.635 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:23:03.635 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:03.635 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:23:03.635 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:03.635 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:03.635 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:03.635 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:03.635 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:03.635 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:03.635 11:03:19 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:03.635 11:03:19 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:23:03.635 11:03:19 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:03.635 11:03:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:03.635 11:03:19 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:23:03.635 11:03:19 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # bdfs=() 00:23:03.635 11:03:19 nvmf_identify_passthru -- common/autotest_common.sh@1520 -- # local bdfs 00:23:03.635 11:03:19 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=($(get_nvme_bdfs)) 00:23:03.635 11:03:19 
nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # get_nvme_bdfs 00:23:03.635 11:03:19 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:23:03.635 11:03:19 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:23:03.635 11:03:19 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:03.635 11:03:19 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:03.635 11:03:19 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # jq -r '.config[].params.traddr' 00:23:03.635 11:03:19 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # (( 1 == 0 )) 00:23:03.635 11:03:19 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # printf '%s\n' 0000:88:00.0 00:23:03.635 11:03:19 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # echo 0000:88:00.0 00:23:03.635 11:03:19 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:23:03.635 11:03:19 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:23:03.635 11:03:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:23:03.635 11:03:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:23:03.635 11:03:19 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:23:03.635 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.819 11:03:23 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:23:07.820 11:03:23 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:23:07.820 11:03:23 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:23:07.820 11:03:23 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:23:07.820 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.004 11:03:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:23:12.004 11:03:28 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:23:12.004 11:03:28 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.004 11:03:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:12.004 11:03:28 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:23:12.004 11:03:28 nvmf_identify_passthru -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:12.004 11:03:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:12.004 11:03:28 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2902261 00:23:12.004 11:03:28 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:12.004 11:03:28 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:12.004 11:03:28 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- 
# waitforlisten 2902261 00:23:12.004 11:03:28 nvmf_identify_passthru -- common/autotest_common.sh@827 -- # '[' -z 2902261 ']' 00:23:12.004 11:03:28 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.004 11:03:28 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:12.004 11:03:28 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:12.004 11:03:28 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:12.004 11:03:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:12.004 [2024-05-15 11:03:28.222373] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:23:12.004 [2024-05-15 11:03:28.222490] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.262 EAL: No free 2048 kB hugepages reported on node 1 00:23:12.262 [2024-05-15 11:03:28.301784] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:12.262 [2024-05-15 11:03:28.408406] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.262 [2024-05-15 11:03:28.408474] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.262 [2024-05-15 11:03:28.408497] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:12.262 [2024-05-15 11:03:28.408508] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:12.262 [2024-05-15 11:03:28.408518] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
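
At this point nvmf_tgt is running inside the namespace but deliberately idle: --wait-for-rpc holds it in the pre-init state so startup-time configuration can still be changed over the RPC socket. A rough equivalent of the launch-and-wait step (the harness's waitforlisten helper retries more carefully; the rpc_get_methods poll below is an assumption, shown only as one way to detect a live socket):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    until scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
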
00:23:12.262 [2024-05-15 11:03:28.408600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.262 [2024-05-15 11:03:28.408665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:12.262 [2024-05-15 11:03:28.408734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.262 [2024-05-15 11:03:28.408731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:12.263 11:03:28 nvmf_identify_passthru -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:12.263 11:03:28 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # return 0 00:23:12.263 11:03:28 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:23:12.263 11:03:28 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.263 11:03:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:12.263 INFO: Log level set to 20 00:23:12.263 INFO: Requests: 00:23:12.263 { 00:23:12.263 "jsonrpc": "2.0", 00:23:12.263 "method": "nvmf_set_config", 00:23:12.263 "id": 1, 00:23:12.263 "params": { 00:23:12.263 "admin_cmd_passthru": { 00:23:12.263 "identify_ctrlr": true 00:23:12.263 } 00:23:12.263 } 00:23:12.263 } 00:23:12.263 00:23:12.263 INFO: response: 00:23:12.263 { 00:23:12.263 "jsonrpc": "2.0", 00:23:12.263 "id": 1, 00:23:12.263 "result": true 00:23:12.263 } 00:23:12.263 00:23:12.263 11:03:28 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.263 11:03:28 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:23:12.263 11:03:28 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.263 11:03:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:12.263 INFO: Setting log level to 20 00:23:12.263 INFO: Setting log level to 20 00:23:12.263 INFO: Log level set to 20 00:23:12.263 INFO: Log level set to 20 00:23:12.263 INFO: Requests: 00:23:12.263 { 00:23:12.263 "jsonrpc": "2.0", 00:23:12.263 "method": "framework_start_init", 00:23:12.263 "id": 1 00:23:12.263 } 00:23:12.263 00:23:12.263 INFO: Requests: 00:23:12.263 { 00:23:12.263 "jsonrpc": "2.0", 00:23:12.263 "method": "framework_start_init", 00:23:12.263 "id": 1 00:23:12.263 } 00:23:12.263 00:23:12.522 [2024-05-15 11:03:28.535303] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:23:12.522 INFO: response: 00:23:12.522 { 00:23:12.522 "jsonrpc": "2.0", 00:23:12.522 "id": 1, 00:23:12.522 "result": true 00:23:12.522 } 00:23:12.522 00:23:12.522 INFO: response: 00:23:12.522 { 00:23:12.522 "jsonrpc": "2.0", 00:23:12.522 "id": 1, 00:23:12.522 "result": true 00:23:12.522 } 00:23:12.522 00:23:12.522 11:03:28 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.522 11:03:28 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:12.522 11:03:28 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.522 11:03:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:12.522 INFO: Setting log level to 40 00:23:12.522 INFO: Setting log level to 40 00:23:12.522 INFO: Setting log level to 40 00:23:12.522 [2024-05-15 11:03:28.545386] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.522 11:03:28 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:12.522 11:03:28 
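
The RPC ordering is the whole point of this setup: nvmf_set_config --passthru-identify-ctrlr is only valid while the app is still pre-init, framework_start_init then brings the subsystems up (the 'Custom identify ctrlr handler enabled' notice confirms the option took effect), and only after that is the TCP transport created. Condensed, with rpc_cmd spelled out as the harness's wrapper around scripts/rpc.py:

    scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # must run before framework_start_init
    scripts/rpc.py framework_start_init                        # 'Custom identify ctrlr handler enabled'
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # '*** TCP Transport Init ***'
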
nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:23:12.522 11:03:28 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:12.522 11:03:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:12.522 11:03:28 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:23:12.522 11:03:28 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:12.522 11:03:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:15.838 Nvme0n1 00:23:15.838 11:03:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.838 11:03:31 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:23:15.838 11:03:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.838 11:03:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:15.838 11:03:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.838 11:03:31 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:15.838 11:03:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.838 11:03:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:15.838 11:03:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.838 11:03:31 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:15.838 11:03:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.838 11:03:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:15.838 [2024-05-15 11:03:31.447195] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:15.838 [2024-05-15 11:03:31.447516] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.838 11:03:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.838 11:03:31 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:23:15.838 11:03:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.838 11:03:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:15.838 [ 00:23:15.838 { 00:23:15.838 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:15.838 "subtype": "Discovery", 00:23:15.838 "listen_addresses": [], 00:23:15.838 "allow_any_host": true, 00:23:15.838 "hosts": [] 00:23:15.838 }, 00:23:15.838 { 00:23:15.838 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.838 "subtype": "NVMe", 00:23:15.838 "listen_addresses": [ 00:23:15.838 { 00:23:15.838 "trtype": "TCP", 00:23:15.838 "adrfam": "IPv4", 00:23:15.838 "traddr": "10.0.0.2", 00:23:15.838 "trsvcid": "4420" 00:23:15.838 } 00:23:15.838 ], 00:23:15.838 "allow_any_host": true, 00:23:15.838 "hosts": [], 00:23:15.838 "serial_number": "SPDK00000000000001", 00:23:15.838 "model_number": "SPDK bdev Controller", 00:23:15.838 "max_namespaces": 1, 00:23:15.838 "min_cntlid": 1, 00:23:15.838 "max_cntlid": 65519, 
00:23:15.838 "namespaces": [ 00:23:15.838 { 00:23:15.838 "nsid": 1, 00:23:15.839 "bdev_name": "Nvme0n1", 00:23:15.839 "name": "Nvme0n1", 00:23:15.839 "nguid": "237A5D1243D44DE58A80BE5216B7C85B", 00:23:15.839 "uuid": "237a5d12-43d4-4de5-8a80-be5216b7c85b" 00:23:15.839 } 00:23:15.839 ] 00:23:15.839 } 00:23:15.839 ] 00:23:15.839 11:03:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.839 11:03:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:15.839 11:03:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:23:15.839 11:03:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:23:15.839 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.839 11:03:31 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:23:15.839 11:03:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:15.839 11:03:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:23:15.839 11:03:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:23:15.839 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.839 11:03:31 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:23:15.839 11:03:31 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:23:15.839 11:03:31 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:23:15.839 11:03:31 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:15.839 11:03:31 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:15.839 11:03:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:15.839 11:03:31 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:15.839 11:03:31 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:23:15.839 11:03:31 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:23:15.839 11:03:31 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:15.839 11:03:31 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:23:15.839 11:03:31 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:15.839 11:03:31 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:23:15.839 11:03:31 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:15.839 11:03:31 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:15.839 rmmod nvme_tcp 00:23:15.839 rmmod nvme_fabrics 00:23:15.839 rmmod nvme_keyring 00:23:15.839 11:03:31 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:15.839 11:03:31 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:23:15.839 11:03:31 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:23:15.839 11:03:31 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2902261 ']' 00:23:15.839 11:03:31 
nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2902261 00:23:15.839 11:03:31 nvmf_identify_passthru -- common/autotest_common.sh@946 -- # '[' -z 2902261 ']' 00:23:15.839 11:03:31 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # kill -0 2902261 00:23:15.839 11:03:31 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # uname 00:23:15.839 11:03:31 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:23:15.839 11:03:31 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2902261 00:23:15.839 11:03:31 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:23:15.839 11:03:31 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:23:15.839 11:03:31 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2902261' 00:23:15.839 killing process with pid 2902261 00:23:15.839 11:03:31 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # kill 2902261 00:23:15.839 [2024-05-15 11:03:31.879648] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:15.839 11:03:31 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # wait 2902261 00:23:17.736 11:03:33 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:17.736 11:03:33 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:17.736 11:03:33 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:17.736 11:03:33 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:17.736 11:03:33 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:17.736 11:03:33 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.736 11:03:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:17.736 11:03:33 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.641 11:03:35 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:19.641 00:23:19.641 real 0m18.335s 00:23:19.641 user 0m26.601s 00:23:19.641 sys 0m2.619s 00:23:19.641 11:03:35 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:19.641 11:03:35 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:23:19.641 ************************************ 00:23:19.641 END TEST nvmf_identify_passthru 00:23:19.641 ************************************ 00:23:19.641 11:03:35 -- spdk/autotest.sh@301 -- # run_test_wrapper nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:23:19.641 11:03:35 -- spdk/autotest.sh@10 -- # local test_name=nvmf_dif 00:23:19.641 11:03:35 -- spdk/autotest.sh@11 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:23:19.641 11:03:35 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:19.641 11:03:35 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:19.641 11:03:35 -- common/autotest_common.sh@10 -- # set +x 00:23:19.641 ************************************ 00:23:19.641 START TEST nvmf_dif 00:23:19.641 ************************************ 00:23:19.641 11:03:35 nvmf_dif -- common/autotest_common.sh@1121 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:23:19.641 * Looking for test storage... 00:23:19.641 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:19.641 11:03:35 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:19.641 11:03:35 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:23:19.641 11:03:35 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:19.641 11:03:35 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:19.641 11:03:35 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:19.641 11:03:35 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:19.641 11:03:35 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:19.641 11:03:35 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:19.641 11:03:35 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:19.641 11:03:35 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:19.641 11:03:35 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:19.641 11:03:35 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:19.641 11:03:35 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:19.642 11:03:35 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:19.642 11:03:35 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:19.642 11:03:35 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:19.642 11:03:35 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:19.642 11:03:35 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:19.642 11:03:35 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:19.642 11:03:35 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:19.642 11:03:35 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:19.642 11:03:35 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:19.642 11:03:35 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.642 11:03:35 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.642 11:03:35 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.642 11:03:35 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:23:19.642 11:03:35 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:19.642 11:03:35 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:23:19.642 11:03:35 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:19.642 11:03:35 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:19.642 11:03:35 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:19.642 11:03:35 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:19.642 11:03:35 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:19.642 11:03:35 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:19.642 11:03:35 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:19.642 11:03:35 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:19.642 11:03:35 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:23:19.642 11:03:35 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:23:19.642 11:03:35 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:23:19.642 11:03:35 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:23:19.642 11:03:35 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:23:19.642 11:03:35 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:19.642 11:03:35 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:19.642 11:03:35 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:19.642 11:03:35 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:19.642 11:03:35 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:19.642 11:03:35 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.642 11:03:35 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:19.642 11:03:35 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.642 11:03:35 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:19.642 11:03:35 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:19.642 11:03:35 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:23:19.642 11:03:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
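
dif.sh now repeats the environment bootstrap the previous suite ran: the scan that follows buckets NICs by PCI vendor and device ID (0x159b is an E810-series port, driven by ice and surfaced here as cvl_0_0 and cvl_0_1) and then rebuilds the same namespace topology. One way to reproduce the inventory by hand, assuming lspci is present on the node:

    lspci -d 8086:159b    # should list the two E810 ports, 0000:0a:00.0 and 0000:0a:00.1
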
00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:22.168 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.168 11:03:38 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:22.169 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:22.169 11:03:38 nvmf_dif -- 
nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:22.169 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:22.169 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:22.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:22.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:23:22.169 00:23:22.169 --- 10.0.0.2 ping statistics --- 00:23:22.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.169 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:22.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:22.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:23:22.169 00:23:22.169 --- 10.0.0.1 ping statistics --- 00:23:22.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:22.169 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:23:22.169 11:03:38 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:23.541 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:23.541 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:23.541 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:23.541 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:23.541 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:23.541 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:23.541 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:23.541 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:23.541 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:23.541 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:23.541 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:23.541 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:23.541 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:23.541 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:23.541 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:23.541 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:23.541 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:23.541 11:03:39 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:23.541 11:03:39 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:23.541 11:03:39 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:23.541 11:03:39 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:23.541 11:03:39 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:23.541 11:03:39 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:23.800 11:03:39 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:23:23.800 11:03:39 nvmf_dif -- 
target/dif.sh@137 -- # nvmfappstart 00:23:23.800 11:03:39 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:23.800 11:03:39 nvmf_dif -- common/autotest_common.sh@720 -- # xtrace_disable 00:23:23.800 11:03:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:23.800 11:03:39 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2906022 00:23:23.800 11:03:39 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:23.800 11:03:39 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2906022 00:23:23.800 11:03:39 nvmf_dif -- common/autotest_common.sh@827 -- # '[' -z 2906022 ']' 00:23:23.800 11:03:39 nvmf_dif -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.800 11:03:39 nvmf_dif -- common/autotest_common.sh@832 -- # local max_retries=100 00:23:23.800 11:03:39 nvmf_dif -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.800 11:03:39 nvmf_dif -- common/autotest_common.sh@836 -- # xtrace_disable 00:23:23.800 11:03:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:23.800 [2024-05-15 11:03:39.824559] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:23:23.800 [2024-05-15 11:03:39.824630] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.800 EAL: No free 2048 kB hugepages reported on node 1 00:23:23.800 [2024-05-15 11:03:39.903785] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.800 [2024-05-15 11:03:40.024252] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.800 [2024-05-15 11:03:40.024317] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.800 [2024-05-15 11:03:40.024334] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.800 [2024-05-15 11:03:40.024348] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.800 [2024-05-15 11:03:40.024360] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
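
Each suite runs against its own target: nvmfappstart launches a fresh nvmf_tgt (pid 2906022 here) inside the namespace, on a single reactor since no core mask is passed this time, and arms a cleanup trap so the target is torn down with the test. The skeleton, using the helper names visible in the trace (the trap itself appears just below):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
    waitforlisten $nvmfpid
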
00:23:23.800 [2024-05-15 11:03:40.024390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.059 11:03:40 nvmf_dif -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:23:24.059 11:03:40 nvmf_dif -- common/autotest_common.sh@860 -- # return 0 00:23:24.059 11:03:40 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:24.059 11:03:40 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:24.059 11:03:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:24.059 11:03:40 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.059 11:03:40 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:23:24.059 11:03:40 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:23:24.059 11:03:40 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.059 11:03:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:24.059 [2024-05-15 11:03:40.179975] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.059 11:03:40 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.059 11:03:40 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:23:24.059 11:03:40 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:24.059 11:03:40 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:24.059 11:03:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:24.059 ************************************ 00:23:24.059 START TEST fio_dif_1_default 00:23:24.059 ************************************ 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1121 -- # fio_dif_1 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:24.059 bdev_null0 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- 
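
The device under test is synthetic: a 64 MB null bdev with 512-byte blocks, 16 bytes of per-block metadata and DIF type 1, exported through subsystem cnode0. Because the transport was created with --dif-insert-or-strip, the target generates protection information on writes and strips it on reads, so the initiator side works with plain 512-byte blocks. The four RPCs this expands to (the listener add follows just below):

    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
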
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:24.059 [2024-05-15 11:03:40.244078] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:24.059 [2024-05-15 11:03:40.244328] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:24.059 { 00:23:24.059 "params": { 00:23:24.059 "name": "Nvme$subsystem", 00:23:24.059 "trtype": "$TEST_TRANSPORT", 00:23:24.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:24.059 "adrfam": "ipv4", 00:23:24.059 "trsvcid": "$NVMF_PORT", 00:23:24.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:24.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:24.059 "hdgst": ${hdgst:-false}, 00:23:24.059 "ddgst": ${ddgst:-false} 00:23:24.059 }, 00:23:24.059 "method": "bdev_nvme_attach_controller" 00:23:24.059 } 00:23:24.059 EOF 00:23:24.059 )") 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # shift 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:23:24.059 11:03:40 
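
fio reaches the target as an NVMe-oF initiator through SPDK's fio plugin instead of the kernel stack: fio_bdev LD_PRELOADs build/fio/spdk_bdev and feeds fio a JSON config whose bdev_nvme_attach_controller block (printed in the trace below) dials cnode0 at 10.0.0.2:4420. Stripped of the /dev/fd indirection, the invocation is roughly the following, with bdev.json and job.fio standing in for the two file descriptors:

    LD_PRELOAD=build/fio/spdk_bdev /usr/src/fio/fio \
        --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio

As a sanity check on the run that follows: it averages about 95 IOPS at a 4 KiB block size, and 95.2 x 4 KiB is 381 KiB/s, exactly the bandwidth fio reports.
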
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libasan 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:24.059 "params": { 00:23:24.059 "name": "Nvme0", 00:23:24.059 "trtype": "tcp", 00:23:24.059 "traddr": "10.0.0.2", 00:23:24.059 "adrfam": "ipv4", 00:23:24.059 "trsvcid": "4420", 00:23:24.059 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:24.059 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:24.059 "hdgst": false, 00:23:24.059 "ddgst": false 00:23:24.059 }, 00:23:24.059 "method": "bdev_nvme_attach_controller" 00:23:24.059 }' 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:24.059 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:24.318 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:24.318 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:24.318 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:23:24.318 11:03:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:24.318 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:24.318 fio-3.35 00:23:24.318 Starting 1 thread 00:23:24.318 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.520 00:23:36.520 filename0: (groupid=0, jobs=1): err= 0: pid=2906251: Wed May 15 11:03:51 2024 00:23:36.520 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10002msec) 00:23:36.520 slat (nsec): min=4392, max=42533, avg=9618.03, stdev=3284.76 00:23:36.520 clat (usec): min=41837, max=45615, avg=41994.19, stdev=255.18 00:23:36.520 lat (usec): min=41860, max=45630, avg=42003.81, stdev=255.07 00:23:36.520 clat percentiles (usec): 00:23:36.520 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:23:36.520 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:23:36.520 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:23:36.520 | 99.00th=[43254], 99.50th=[43254], 
99.90th=[45876], 99.95th=[45876], 00:23:36.520 | 99.99th=[45876] 00:23:36.520 bw ( KiB/s): min= 352, max= 384, per=99.81%, avg=380.63, stdev=10.09, samples=19 00:23:36.520 iops : min= 88, max= 96, avg=95.16, stdev= 2.52, samples=19 00:23:36.520 lat (msec) : 50=100.00% 00:23:36.520 cpu : usr=89.76%, sys=9.98%, ctx=13, majf=0, minf=218 00:23:36.520 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:36.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:36.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:36.520 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:36.520 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:36.520 00:23:36.520 Run status group 0 (all jobs): 00:23:36.520 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3808KiB (3899kB), run=10002-10002msec 00:23:36.520 11:03:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:36.520 11:03:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:23:36.520 11:03:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.521 00:23:36.521 real 0m11.211s 00:23:36.521 user 0m10.299s 00:23:36.521 sys 0m1.289s 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:36.521 ************************************ 00:23:36.521 END TEST fio_dif_1_default 00:23:36.521 ************************************ 00:23:36.521 11:03:51 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:36.521 11:03:51 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:36.521 11:03:51 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:36.521 11:03:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:36.521 ************************************ 00:23:36.521 START TEST fio_dif_1_multi_subsystems 00:23:36.521 ************************************ 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1121 -- # fio_dif_1_multi_subsystems 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- 
# local sub 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:36.521 bdev_null0 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:36.521 [2024-05-15 11:03:51.510232] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:36.521 bdev_null1 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.521 11:03:51 
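
fio_dif_1_multi_subsystems doubles the single-subsystem layout: bdev_null0 behind cnode0 and bdev_null1 behind cnode1, both on the same 10.0.0.2:4420 portal, so one fio process can drive two independent DIF-protected namespaces. The create_subsystems 0 1 call traced here reduces to:

    for sub in 0 1; do
        scripts/rpc.py bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
        scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
            --serial-number "53313233-$sub" --allow-any-host
        scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
        scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" -t tcp -a 10.0.0.2 -s 4420
    done
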
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:36.521 { 00:23:36.521 "params": { 00:23:36.521 "name": "Nvme$subsystem", 00:23:36.521 "trtype": "$TEST_TRANSPORT", 00:23:36.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.521 "adrfam": "ipv4", 00:23:36.521 "trsvcid": "$NVMF_PORT", 00:23:36.521 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.521 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.521 "hdgst": ${hdgst:-false}, 00:23:36.521 "ddgst": ${ddgst:-false} 00:23:36.521 }, 00:23:36.521 "method": "bdev_nvme_attach_controller" 00:23:36.521 } 00:23:36.521 EOF 00:23:36.521 )") 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # shift 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libasan 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:36.521 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:36.521 { 00:23:36.521 "params": { 00:23:36.521 "name": "Nvme$subsystem", 00:23:36.521 "trtype": "$TEST_TRANSPORT", 00:23:36.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:36.521 "adrfam": "ipv4", 00:23:36.522 "trsvcid": "$NVMF_PORT", 00:23:36.522 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:36.522 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:36.522 "hdgst": ${hdgst:-false}, 00:23:36.522 "ddgst": ${ddgst:-false} 00:23:36.522 }, 00:23:36.522 "method": "bdev_nvme_attach_controller" 00:23:36.522 } 00:23:36.522 EOF 00:23:36.522 )") 00:23:36.522 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:23:36.522 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:23:36.522 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:36.522 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
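The trace above is nvmf/common.sh's gen_nvmf_target_json at work: one heredoc fragment per subsystem is appended to a bash array, comma-joined, and validated with jq before being handed to fio. A minimal standalone sketch of that pattern, assuming the TEST_TRANSPORT/NVMF_FIRST_TARGET_IP/NVMF_PORT variables exported by the autotest environment, and wrapping the joined fragments in a bare JSON array rather than the full bdev-subsystem document the real helper emits:

config=()
for subsystem in 0 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Comma-join the fragments (first char of IFS) and let jq validate them;
# the real helper embeds this array inside the full JSON config that the
# printf in the trace below emits.
(IFS=,; printf '[%s]\n' "${config[*]}") | jq .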
00:23:36.522 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:23:36.522 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:36.522 "params": { 00:23:36.522 "name": "Nvme0", 00:23:36.522 "trtype": "tcp", 00:23:36.522 "traddr": "10.0.0.2", 00:23:36.522 "adrfam": "ipv4", 00:23:36.522 "trsvcid": "4420", 00:23:36.522 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:36.522 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:36.522 "hdgst": false, 00:23:36.522 "ddgst": false 00:23:36.522 }, 00:23:36.522 "method": "bdev_nvme_attach_controller" 00:23:36.522 },{ 00:23:36.522 "params": { 00:23:36.522 "name": "Nvme1", 00:23:36.522 "trtype": "tcp", 00:23:36.522 "traddr": "10.0.0.2", 00:23:36.522 "adrfam": "ipv4", 00:23:36.522 "trsvcid": "4420", 00:23:36.522 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:36.522 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:36.522 "hdgst": false, 00:23:36.522 "ddgst": false 00:23:36.522 }, 00:23:36.522 "method": "bdev_nvme_attach_controller" 00:23:36.522 }' 00:23:36.522 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:36.522 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:36.522 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:36.522 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:36.522 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:36.522 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:36.522 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:36.522 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:36.522 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:23:36.522 11:03:51 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:36.522 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:36.522 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:36.522 fio-3.35 00:23:36.522 Starting 2 threads 00:23:36.522 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.519 00:23:46.519 filename0: (groupid=0, jobs=1): err= 0: pid=2907650: Wed May 15 11:04:02 2024 00:23:46.519 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10019msec) 00:23:46.519 slat (nsec): min=7032, max=83353, avg=11072.59, stdev=5907.88 00:23:46.519 clat (usec): min=1039, max=43541, avg=21557.49, stdev=20344.89 00:23:46.519 lat (usec): min=1047, max=43573, avg=21568.57, stdev=20343.52 00:23:46.519 clat percentiles (usec): 00:23:46.519 | 1.00th=[ 1057], 5.00th=[ 1074], 10.00th=[ 1090], 20.00th=[ 1106], 00:23:46.519 | 30.00th=[ 1123], 40.00th=[ 1156], 50.00th=[41157], 60.00th=[41681], 00:23:46.519 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:23:46.519 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:23:46.519 | 99.99th=[43779] 
00:23:46.519 bw ( KiB/s): min= 672, max= 768, per=57.05%, avg=740.80, stdev=33.28, samples=20 00:23:46.519 iops : min= 168, max= 192, avg=185.20, stdev= 8.32, samples=20 00:23:46.519 lat (msec) : 2=49.78%, 50=50.22% 00:23:46.519 cpu : usr=97.55%, sys=2.16%, ctx=13, majf=0, minf=146 00:23:46.519 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:46.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.519 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:46.519 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:46.519 filename1: (groupid=0, jobs=1): err= 0: pid=2907651: Wed May 15 11:04:02 2024 00:23:46.519 read: IOPS=139, BW=557KiB/s (570kB/s)(5584KiB/10029msec) 00:23:46.519 slat (nsec): min=7860, max=59719, avg=13138.25, stdev=6411.49 00:23:46.519 clat (usec): min=1061, max=43594, avg=28693.77, stdev=19027.42 00:23:46.519 lat (usec): min=1070, max=43613, avg=28706.91, stdev=19027.57 00:23:46.519 clat percentiles (usec): 00:23:46.519 | 1.00th=[ 1090], 5.00th=[ 1123], 10.00th=[ 1156], 20.00th=[ 1221], 00:23:46.519 | 30.00th=[ 1336], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:23:46.519 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:23:46.519 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:23:46.519 | 99.99th=[43779] 00:23:46.519 bw ( KiB/s): min= 352, max= 768, per=42.87%, avg=556.80, stdev=173.23, samples=20 00:23:46.519 iops : min= 88, max= 192, avg=139.20, stdev=43.31, samples=20 00:23:46.519 lat (msec) : 2=32.38%, 50=67.62% 00:23:46.519 cpu : usr=96.89%, sys=2.75%, ctx=14, majf=0, minf=154 00:23:46.519 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:46.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:46.519 issued rwts: total=1396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:46.519 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:46.519 00:23:46.519 Run status group 0 (all jobs): 00:23:46.519 READ: bw=1297KiB/s (1328kB/s), 557KiB/s-741KiB/s (570kB/s-759kB/s), io=12.7MiB (13.3MB), run=10019-10029msec 00:23:46.519 11:04:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:23:46.519 11:04:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:23:46.519 11:04:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:46.519 11:04:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:46.519 11:04:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:23:46.519 11:04:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:46.519 11:04:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.519 11:04:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:46.519 11:04:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.519 11:04:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:46.519 11:04:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.519 11:04:02 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:46.777 11:04:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.777 11:04:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:23:46.777 11:04:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:23:46.777 11:04:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:23:46.777 11:04:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:46.777 11:04:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.777 11:04:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:46.777 11:04:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.777 11:04:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:23:46.777 11:04:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.777 11:04:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:46.777 11:04:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.777 00:23:46.777 real 0m11.289s 00:23:46.777 user 0m20.963s 00:23:46.777 sys 0m0.784s 00:23:46.777 11:04:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # xtrace_disable 00:23:46.777 11:04:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:46.777 ************************************ 00:23:46.777 END TEST fio_dif_1_multi_subsystems 00:23:46.777 ************************************ 00:23:46.777 11:04:02 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:23:46.777 11:04:02 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:23:46.777 11:04:02 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable 00:23:46.777 11:04:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:46.777 ************************************ 00:23:46.777 START TEST fio_dif_rand_params 00:23:46.777 ************************************ 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1121 -- # fio_dif_rand_params 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.777 bdev_null0 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:46.777 [2024-05-15 11:04:02.852737] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:46.777 { 00:23:46.777 "params": { 00:23:46.777 "name": "Nvme$subsystem", 00:23:46.777 "trtype": "$TEST_TRANSPORT", 00:23:46.777 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:46.777 "adrfam": "ipv4", 00:23:46.777 "trsvcid": "$NVMF_PORT", 00:23:46.777 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:46.777 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:46.777 "hdgst": ${hdgst:-false}, 00:23:46.777 "ddgst": ${ddgst:-false} 00:23:46.777 }, 00:23:46.777 "method": "bdev_nvme_attach_controller" 00:23:46.777 } 00:23:46.777 EOF 00:23:46.777 )") 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
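The /dev/fd/62 and /dev/fd/61 arguments visible in the fio command line are file descriptors created by bash process substitution: the JSON bdev config assembled above arrives on one descriptor and the generated fio job file on the other, so neither ever touches disk. A plausible reconstruction of that wiring in target/dif.sh (helper names as they appear in the trace; the exact composition is inferred, not shown verbatim in the log):

# target/dif.sh@82 in the trace; both descriptor arguments are
# process substitutions feeding fio its config and job file.
fio_bdev --ioengine=spdk_bdev \
    --spdk_json_conf <(create_json_sub_conf 0) <(gen_fio_conf)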
00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:46.777 "params": { 00:23:46.777 "name": "Nvme0", 00:23:46.777 "trtype": "tcp", 00:23:46.777 "traddr": "10.0.0.2", 00:23:46.777 "adrfam": "ipv4", 00:23:46.777 "trsvcid": "4420", 00:23:46.777 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:46.777 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:46.777 "hdgst": false, 00:23:46.777 "ddgst": false 00:23:46.777 }, 00:23:46.777 "method": "bdev_nvme_attach_controller" 00:23:46.777 }' 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:23:46.777 11:04:02 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:47.034 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:23:47.034 ... 
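The two ldd probes traced above (grep libasan, then grep libclang_rt.asan) decide whether a sanitizer runtime must be preloaded ahead of the SPDK fio plugin so the sanitizer can initialize first; in this run both probes came back empty, which is why LD_PRELOAD carries only the plugin path. Condensed into a standalone sketch, with the plugin and fio paths taken from this run:

plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=
for sanitizer in libasan libclang_rt.asan; do
  # Third ldd column is the resolved library path; empty when not linked.
  asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
  [[ -n $asan_lib ]] && break
done
# The sanitizer runtime (if any) must come first in LD_PRELOAD,
# followed by the bdev ioengine plugin itself.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61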
00:23:47.034 fio-3.35 00:23:47.034 Starting 3 threads 00:23:47.034 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.599 00:23:53.600 filename0: (groupid=0, jobs=1): err= 0: pid=2909056: Wed May 15 11:04:08 2024 00:23:53.600 read: IOPS=119, BW=15.0MiB/s (15.7MB/s)(75.1MiB/5018msec) 00:23:53.600 slat (nsec): min=4886, max=23388, avg=13091.63, stdev=2300.06 00:23:53.600 clat (usec): min=6667, max=99516, avg=25024.13, stdev=20219.82 00:23:53.600 lat (usec): min=6680, max=99530, avg=25037.22, stdev=20219.87 00:23:53.600 clat percentiles (usec): 00:23:53.600 | 1.00th=[ 7111], 5.00th=[ 7635], 10.00th=[ 7832], 20.00th=[10683], 00:23:53.600 | 30.00th=[12387], 40.00th=[13829], 50.00th=[14877], 60.00th=[16188], 00:23:53.600 | 70.00th=[18220], 80.00th=[53216], 90.00th=[55837], 95.00th=[57410], 00:23:53.600 | 99.00th=[61080], 99.50th=[96994], 99.90th=[99091], 99.95th=[99091], 00:23:53.600 | 99.99th=[99091] 00:23:53.600 bw ( KiB/s): min=11264, max=26112, per=22.07%, avg=15308.80, stdev=4347.48, samples=10 00:23:53.600 iops : min= 88, max= 204, avg=119.60, stdev=33.96, samples=10 00:23:53.600 lat (msec) : 10=16.14%, 20=55.41%, 50=1.00%, 100=27.45% 00:23:53.600 cpu : usr=93.74%, sys=5.82%, ctx=11, majf=0, minf=49 00:23:53.600 IO depths : 1=2.3%, 2=97.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:53.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.600 issued rwts: total=601,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:53.600 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:53.600 filename0: (groupid=0, jobs=1): err= 0: pid=2909057: Wed May 15 11:04:08 2024 00:23:53.600 read: IOPS=247, BW=31.0MiB/s (32.5MB/s)(156MiB/5047msec) 00:23:53.600 slat (nsec): min=5216, max=52347, avg=12286.50, stdev=2716.03 00:23:53.600 clat (usec): min=6499, max=53622, avg=12056.10, stdev=10547.85 00:23:53.600 lat (usec): min=6512, max=53636, avg=12068.39, stdev=10547.91 00:23:53.600 clat percentiles (usec): 00:23:53.600 | 1.00th=[ 7111], 5.00th=[ 7308], 10.00th=[ 7504], 20.00th=[ 7767], 00:23:53.600 | 30.00th=[ 8225], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9634], 00:23:53.600 | 70.00th=[10159], 80.00th=[11076], 90.00th=[12256], 95.00th=[50594], 00:23:53.600 | 99.00th=[52167], 99.50th=[52691], 99.90th=[53216], 99.95th=[53740], 00:23:53.600 | 99.99th=[53740] 00:23:53.600 bw ( KiB/s): min=20736, max=38400, per=46.06%, avg=31948.80, stdev=5445.04, samples=10 00:23:53.600 iops : min= 162, max= 300, avg=249.60, stdev=42.54, samples=10 00:23:53.600 lat (msec) : 10=67.36%, 20=26.00%, 50=0.72%, 100=5.92% 00:23:53.600 cpu : usr=92.03%, sys=6.90%, ctx=13, majf=0, minf=114 00:23:53.600 IO depths : 1=1.8%, 2=98.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:53.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.600 issued rwts: total=1250,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:53.600 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:53.600 filename0: (groupid=0, jobs=1): err= 0: pid=2909058: Wed May 15 11:04:08 2024 00:23:53.600 read: IOPS=176, BW=22.1MiB/s (23.2MB/s)(111MiB/5003msec) 00:23:53.600 slat (nsec): min=5091, max=26463, avg=12596.86, stdev=2683.04 00:23:53.600 clat (usec): min=6541, max=94125, avg=16958.89, stdev=16192.30 00:23:53.600 lat (usec): min=6553, max=94139, avg=16971.49, stdev=16192.40 00:23:53.600 clat percentiles (usec): 
00:23:53.600 | 1.00th=[ 6718], 5.00th=[ 7439], 10.00th=[ 7767], 20.00th=[ 8848], 00:23:53.600 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10683], 60.00th=[11469], 00:23:53.600 | 70.00th=[12518], 80.00th=[13698], 90.00th=[51643], 95.00th=[53216], 00:23:53.600 | 99.00th=[63701], 99.50th=[92799], 99.90th=[93848], 99.95th=[93848], 00:23:53.600 | 99.99th=[93848] 00:23:53.600 bw ( KiB/s): min=13312, max=28416, per=32.52%, avg=22558.20, stdev=4256.08, samples=10 00:23:53.600 iops : min= 104, max= 222, avg=176.20, stdev=33.25, samples=10 00:23:53.600 lat (msec) : 10=39.37%, 20=45.70%, 50=1.13%, 100=13.80% 00:23:53.600 cpu : usr=92.10%, sys=7.16%, ctx=149, majf=0, minf=98 00:23:53.600 IO depths : 1=3.1%, 2=96.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:53.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:53.600 issued rwts: total=884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:53.600 latency : target=0, window=0, percentile=100.00%, depth=3 00:23:53.600 00:23:53.600 Run status group 0 (all jobs): 00:23:53.600 READ: bw=67.7MiB/s (71.0MB/s), 15.0MiB/s-31.0MiB/s (15.7MB/s-32.5MB/s), io=342MiB (358MB), run=5003-5047msec 00:23:53.600 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:23:53.600 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:23:53.600 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:23:53.600 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:53.600 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:23:53.600 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:53.600 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.600 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.600 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.600 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:53.600 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.600 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.600 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.600 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:23:53.600 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:23:53.600 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:23:53.600 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:23:53.600 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:23:53.600 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:23:53.600 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
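The stretch of trace that follows repeats the create_subsystem helper three times (sub_id 0 through 2), this time backing each subsystem with a DIF type 2 null bdev. Condensed, each iteration issues the RPC sequence below; the commands and arguments are exactly those visible in the trace, with SPDK's scripts/rpc.py client standing in for the test harness's rpc_cmd wrapper:

for sub in 0 1 2; do
  # 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 2
  ./scripts/rpc.py bdev_null_create bdev_null$sub 64 512 --md-size 16 --dif-type 2
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$sub \
      --serial-number 53313233-$sub --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$sub bdev_null$sub
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$sub \
      -t tcp -a 10.0.0.2 -s 4420
done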
00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.601 bdev_null0 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.601 [2024-05-15 11:04:08.944354] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.601 bdev_null1 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.601 bdev_null2 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.601 11:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.601 11:04:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.601 11:04:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:53.601 11:04:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.601 11:04:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:23:53.601 11:04:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.601 11:04:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:23:53.601 11:04:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:23:53.601 11:04:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:23:53.601 11:04:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:23:53.601 11:04:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:53.601 11:04:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:23:53.601 11:04:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:53.601 11:04:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:23:53.601 11:04:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.601 11:04:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:23:53.601 11:04:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.601 { 00:23:53.601 "params": { 00:23:53.601 "name": "Nvme$subsystem", 00:23:53.601 "trtype": "$TEST_TRANSPORT", 00:23:53.601 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.601 "adrfam": "ipv4", 00:23:53.601 "trsvcid": "$NVMF_PORT", 00:23:53.601 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.601 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.601 "hdgst": ${hdgst:-false}, 00:23:53.601 "ddgst": ${ddgst:-false} 00:23:53.602 }, 00:23:53.602 "method": "bdev_nvme_attach_controller" 00:23:53.602 } 00:23:53.602 EOF 00:23:53.602 )") 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.602 { 00:23:53.602 "params": { 00:23:53.602 "name": "Nvme$subsystem", 00:23:53.602 "trtype": "$TEST_TRANSPORT", 00:23:53.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.602 "adrfam": "ipv4", 00:23:53.602 "trsvcid": "$NVMF_PORT", 00:23:53.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.602 "hdgst": ${hdgst:-false}, 00:23:53.602 "ddgst": ${ddgst:-false} 00:23:53.602 }, 00:23:53.602 "method": "bdev_nvme_attach_controller" 00:23:53.602 } 00:23:53.602 EOF 00:23:53.602 )") 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 
-- # (( file++ )) 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.602 { 00:23:53.602 "params": { 00:23:53.602 "name": "Nvme$subsystem", 00:23:53.602 "trtype": "$TEST_TRANSPORT", 00:23:53.602 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.602 "adrfam": "ipv4", 00:23:53.602 "trsvcid": "$NVMF_PORT", 00:23:53.602 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.602 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.602 "hdgst": ${hdgst:-false}, 00:23:53.602 "ddgst": ${ddgst:-false} 00:23:53.602 }, 00:23:53.602 "method": "bdev_nvme_attach_controller" 00:23:53.602 } 00:23:53.602 EOF 00:23:53.602 )") 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:53.602 "params": { 00:23:53.602 "name": "Nvme0", 00:23:53.602 "trtype": "tcp", 00:23:53.602 "traddr": "10.0.0.2", 00:23:53.602 "adrfam": "ipv4", 00:23:53.602 "trsvcid": "4420", 00:23:53.602 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:53.602 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:53.602 "hdgst": false, 00:23:53.602 "ddgst": false 00:23:53.602 }, 00:23:53.602 "method": "bdev_nvme_attach_controller" 00:23:53.602 },{ 00:23:53.602 "params": { 00:23:53.602 "name": "Nvme1", 00:23:53.602 "trtype": "tcp", 00:23:53.602 "traddr": "10.0.0.2", 00:23:53.602 "adrfam": "ipv4", 00:23:53.602 "trsvcid": "4420", 00:23:53.602 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.602 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:53.602 "hdgst": false, 00:23:53.602 "ddgst": false 00:23:53.602 }, 00:23:53.602 "method": "bdev_nvme_attach_controller" 00:23:53.602 },{ 00:23:53.602 "params": { 00:23:53.602 "name": "Nvme2", 00:23:53.602 "trtype": "tcp", 00:23:53.602 "traddr": "10.0.0.2", 00:23:53.602 "adrfam": "ipv4", 00:23:53.602 "trsvcid": "4420", 00:23:53.602 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:53.602 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:53.602 "hdgst": false, 00:23:53.602 "ddgst": false 00:23:53.602 }, 00:23:53.602 "method": "bdev_nvme_attach_controller" 00:23:53.602 }' 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # asan_lib= 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:23:53.602 11:04:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:53.602 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:53.602 ... 00:23:53.602 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:53.602 ... 00:23:53.602 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:23:53.602 ... 00:23:53.602 fio-3.35 00:23:53.602 Starting 24 threads 00:23:53.602 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.807 00:24:05.807 filename0: (groupid=0, jobs=1): err= 0: pid=2909805: Wed May 15 11:04:20 2024 00:24:05.807 read: IOPS=460, BW=1841KiB/s (1885kB/s)(18.0MiB/10012msec) 00:24:05.807 slat (usec): min=7, max=138, avg=29.69, stdev=13.76 00:24:05.807 clat (usec): min=6125, max=60699, avg=34494.07, stdev=4227.92 00:24:05.807 lat (usec): min=6140, max=60709, avg=34523.76, stdev=4228.98 00:24:05.807 clat percentiles (usec): 00:24:05.807 | 1.00th=[14091], 5.00th=[32375], 10.00th=[33424], 20.00th=[33817], 00:24:05.807 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[34866], 00:24:05.807 | 70.00th=[34866], 80.00th=[35390], 90.00th=[36439], 95.00th=[38011], 00:24:05.807 | 99.00th=[51119], 99.50th=[58459], 99.90th=[60556], 99.95th=[60556], 00:24:05.807 | 99.99th=[60556] 00:24:05.807 bw ( KiB/s): min= 1664, max= 1920, per=4.22%, avg=1842.20, stdev=77.36, samples=20 00:24:05.807 iops : min= 416, max= 480, avg=460.55, stdev=19.34, samples=20 00:24:05.807 lat (msec) : 10=0.52%, 20=1.50%, 50=96.96%, 100=1.02% 00:24:05.807 cpu : usr=97.95%, sys=1.64%, ctx=24, majf=0, minf=53 00:24:05.807 IO depths : 1=5.3%, 2=11.1%, 4=24.2%, 8=52.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:24:05.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.807 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.807 issued rwts: total=4608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.807 filename0: (groupid=0, jobs=1): err= 0: pid=2909806: Wed May 15 11:04:20 2024 00:24:05.807 read: IOPS=450, BW=1803KiB/s (1846kB/s)(17.6MiB/10010msec) 00:24:05.807 slat (usec): min=8, max=163, avg=34.16, stdev=16.14 00:24:05.807 clat (usec): min=17965, max=79116, avg=35174.45, stdev=3548.57 00:24:05.807 lat (usec): min=18007, max=79143, avg=35208.61, stdev=3546.81 00:24:05.807 clat percentiles (usec): 00:24:05.807 | 1.00th=[27657], 5.00th=[33162], 10.00th=[33817], 20.00th=[34341], 00:24:05.807 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34866], 60.00th=[34866], 00:24:05.807 | 70.00th=[34866], 80.00th=[35390], 90.00th=[36963], 95.00th=[39060], 00:24:05.807 | 99.00th=[49546], 99.50th=[51119], 99.90th=[67634], 99.95th=[79168], 00:24:05.807 | 99.99th=[79168] 00:24:05.807 bw ( KiB/s): min= 1539, max= 1920, per=4.12%, avg=1798.89, stdev=89.76, samples=19 00:24:05.807 iops : min= 384, max= 480, avg=449.68, stdev=22.56, samples=19 00:24:05.807 lat (msec) : 20=0.29%, 50=98.91%, 100=0.80% 
00:24:05.807 cpu : usr=86.59%, sys=5.35%, ctx=282, majf=0, minf=51 00:24:05.807 IO depths : 1=5.8%, 2=11.9%, 4=24.6%, 8=51.1%, 16=6.8%, 32=0.0%, >=64=0.0% 00:24:05.807 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.807 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.807 issued rwts: total=4512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.807 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.807 filename0: (groupid=0, jobs=1): err= 0: pid=2909807: Wed May 15 11:04:20 2024 00:24:05.807 read: IOPS=462, BW=1849KiB/s (1893kB/s)(18.1MiB/10031msec) 00:24:05.807 slat (usec): min=8, max=169, avg=36.47, stdev=24.30 00:24:05.807 clat (usec): min=11032, max=62160, avg=34304.60, stdev=5801.42 00:24:05.807 lat (usec): min=11042, max=62183, avg=34341.08, stdev=5805.36 00:24:05.807 clat percentiles (usec): 00:24:05.807 | 1.00th=[14877], 5.00th=[21365], 10.00th=[30016], 20.00th=[33817], 00:24:05.807 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[34866], 00:24:05.807 | 70.00th=[34866], 80.00th=[35390], 90.00th=[37487], 95.00th=[43254], 00:24:05.808 | 99.00th=[53216], 99.50th=[55837], 99.90th=[62129], 99.95th=[62129], 00:24:05.808 | 99.99th=[62129] 00:24:05.808 bw ( KiB/s): min= 1760, max= 2144, per=4.24%, avg=1850.40, stdev=88.60, samples=20 00:24:05.808 iops : min= 440, max= 536, avg=462.60, stdev=22.15, samples=20 00:24:05.808 lat (msec) : 20=2.59%, 50=95.30%, 100=2.11% 00:24:05.808 cpu : usr=97.80%, sys=1.72%, ctx=19, majf=0, minf=64 00:24:05.808 IO depths : 1=3.3%, 2=6.9%, 4=19.9%, 8=60.2%, 16=9.7%, 32=0.0%, >=64=0.0% 00:24:05.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.808 complete : 0=0.0%, 4=93.5%, 8=1.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.808 issued rwts: total=4636,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.808 filename0: (groupid=0, jobs=1): err= 0: pid=2909808: Wed May 15 11:04:20 2024 00:24:05.808 read: IOPS=447, BW=1789KiB/s (1832kB/s)(17.5MiB/10007msec) 00:24:05.808 slat (usec): min=8, max=138, avg=27.78, stdev=16.92 00:24:05.808 clat (usec): min=10447, max=64656, avg=35600.95, stdev=4851.23 00:24:05.808 lat (usec): min=10481, max=64671, avg=35628.73, stdev=4850.82 00:24:05.808 clat percentiles (usec): 00:24:05.808 | 1.00th=[19792], 5.00th=[33162], 10.00th=[33817], 20.00th=[34341], 00:24:05.808 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34866], 60.00th=[34866], 00:24:05.808 | 70.00th=[35390], 80.00th=[35914], 90.00th=[38536], 95.00th=[46400], 00:24:05.808 | 99.00th=[54789], 99.50th=[57410], 99.90th=[64750], 99.95th=[64750], 00:24:05.808 | 99.99th=[64750] 00:24:05.808 bw ( KiB/s): min= 1536, max= 1904, per=4.09%, avg=1785.26, stdev=97.95, samples=19 00:24:05.808 iops : min= 384, max= 476, avg=446.32, stdev=24.49, samples=19 00:24:05.808 lat (msec) : 20=1.03%, 50=95.87%, 100=3.11% 00:24:05.808 cpu : usr=97.69%, sys=1.77%, ctx=49, majf=0, minf=42 00:24:05.808 IO depths : 1=1.1%, 2=3.1%, 4=11.9%, 8=69.9%, 16=14.1%, 32=0.0%, >=64=0.0% 00:24:05.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.808 complete : 0=0.0%, 4=91.7%, 8=5.1%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.808 issued rwts: total=4476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.808 filename0: (groupid=0, jobs=1): err= 0: pid=2909809: Wed May 15 11:04:20 2024 00:24:05.808 read: IOPS=445, 
BW=1783KiB/s (1826kB/s)(17.4MiB/10006msec) 00:24:05.808 slat (usec): min=8, max=159, avg=36.13, stdev=22.41 00:24:05.808 clat (usec): min=10271, max=65333, avg=35713.16, stdev=4627.35 00:24:05.808 lat (usec): min=10297, max=65352, avg=35749.29, stdev=4625.46 00:24:05.808 clat percentiles (usec): 00:24:05.808 | 1.00th=[23200], 5.00th=[32900], 10.00th=[33817], 20.00th=[34341], 00:24:05.808 | 30.00th=[34341], 40.00th=[34866], 50.00th=[34866], 60.00th=[34866], 00:24:05.808 | 70.00th=[35390], 80.00th=[35914], 90.00th=[39060], 95.00th=[45351], 00:24:05.808 | 99.00th=[53740], 99.50th=[55313], 99.90th=[65274], 99.95th=[65274], 00:24:05.808 | 99.99th=[65274] 00:24:05.808 bw ( KiB/s): min= 1523, max= 1920, per=4.07%, avg=1774.89, stdev=92.08, samples=19 00:24:05.808 iops : min= 380, max= 480, avg=443.68, stdev=23.13, samples=19 00:24:05.808 lat (msec) : 20=0.47%, 50=97.51%, 100=2.02% 00:24:05.808 cpu : usr=97.83%, sys=1.68%, ctx=28, majf=0, minf=105 00:24:05.808 IO depths : 1=0.1%, 2=0.4%, 4=7.9%, 8=75.7%, 16=15.9%, 32=0.0%, >=64=0.0% 00:24:05.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.808 complete : 0=0.0%, 4=91.3%, 8=6.3%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.808 issued rwts: total=4461,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.808 filename0: (groupid=0, jobs=1): err= 0: pid=2909810: Wed May 15 11:04:20 2024 00:24:05.808 read: IOPS=448, BW=1793KiB/s (1836kB/s)(17.5MiB/10006msec) 00:24:05.808 slat (usec): min=8, max=168, avg=31.97, stdev=16.47 00:24:05.808 clat (usec): min=8804, max=97908, avg=35524.65, stdev=5200.67 00:24:05.808 lat (usec): min=8834, max=97940, avg=35556.62, stdev=5199.70 00:24:05.808 clat percentiles (usec): 00:24:05.808 | 1.00th=[21103], 5.00th=[32900], 10.00th=[33424], 20.00th=[34341], 00:24:05.808 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34866], 60.00th=[34866], 00:24:05.808 | 70.00th=[35390], 80.00th=[35914], 90.00th=[38536], 95.00th=[43254], 00:24:05.808 | 99.00th=[55837], 99.50th=[62129], 99.90th=[78119], 99.95th=[96994], 00:24:05.808 | 99.99th=[98042] 00:24:05.808 bw ( KiB/s): min= 1410, max= 1920, per=4.10%, avg=1787.05, stdev=109.96, samples=19 00:24:05.808 iops : min= 352, max= 480, avg=446.74, stdev=27.59, samples=19 00:24:05.808 lat (msec) : 10=0.07%, 20=0.65%, 50=97.21%, 100=2.07% 00:24:05.808 cpu : usr=92.07%, sys=3.94%, ctx=198, majf=0, minf=51 00:24:05.808 IO depths : 1=0.1%, 2=0.8%, 4=12.6%, 8=71.5%, 16=15.0%, 32=0.0%, >=64=0.0% 00:24:05.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.808 complete : 0=0.0%, 4=92.3%, 8=4.0%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.808 issued rwts: total=4484,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.808 filename0: (groupid=0, jobs=1): err= 0: pid=2909811: Wed May 15 11:04:20 2024 00:24:05.808 read: IOPS=453, BW=1813KiB/s (1856kB/s)(17.7MiB/10018msec) 00:24:05.808 slat (usec): min=14, max=165, avg=58.59, stdev=25.64 00:24:05.808 clat (usec): min=14691, max=71185, avg=34826.54, stdev=4194.83 00:24:05.808 lat (usec): min=14769, max=71209, avg=34885.13, stdev=4190.87 00:24:05.808 clat percentiles (usec): 00:24:05.808 | 1.00th=[21627], 5.00th=[32113], 10.00th=[33162], 20.00th=[33817], 00:24:05.808 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[34866], 00:24:05.808 | 70.00th=[34866], 80.00th=[35390], 90.00th=[36963], 95.00th=[41157], 00:24:05.808 | 99.00th=[52691], 
99.50th=[57410], 99.90th=[60556], 99.95th=[70779], 00:24:05.808 | 99.99th=[70779] 00:24:05.808 bw ( KiB/s): min= 1664, max= 1920, per=4.15%, avg=1809.40, stdev=60.53, samples=20 00:24:05.808 iops : min= 416, max= 480, avg=452.35, stdev=15.13, samples=20 00:24:05.808 lat (msec) : 20=0.37%, 50=98.22%, 100=1.41% 00:24:05.808 cpu : usr=97.93%, sys=1.60%, ctx=17, majf=0, minf=48 00:24:05.808 IO depths : 1=3.7%, 2=8.7%, 4=22.8%, 8=56.0%, 16=8.9%, 32=0.0%, >=64=0.0% 00:24:05.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.808 complete : 0=0.0%, 4=93.8%, 8=0.5%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.808 issued rwts: total=4540,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.808 filename0: (groupid=0, jobs=1): err= 0: pid=2909812: Wed May 15 11:04:20 2024 00:24:05.808 read: IOPS=457, BW=1830KiB/s (1874kB/s)(17.9MiB/10016msec) 00:24:05.808 slat (usec): min=8, max=261, avg=29.81, stdev=13.00 00:24:05.808 clat (usec): min=8713, max=53840, avg=34735.99, stdev=3487.73 00:24:05.808 lat (usec): min=8737, max=53861, avg=34765.80, stdev=3488.54 00:24:05.808 clat percentiles (usec): 00:24:05.808 | 1.00th=[20579], 5.00th=[32637], 10.00th=[33817], 20.00th=[33817], 00:24:05.808 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34866], 60.00th=[34866], 00:24:05.808 | 70.00th=[34866], 80.00th=[35390], 90.00th=[36439], 95.00th=[38536], 00:24:05.808 | 99.00th=[47973], 99.50th=[49021], 99.90th=[53740], 99.95th=[53740], 00:24:05.808 | 99.99th=[53740] 00:24:05.808 bw ( KiB/s): min= 1648, max= 1920, per=4.18%, avg=1826.20, stdev=73.43, samples=20 00:24:05.808 iops : min= 412, max= 480, avg=456.55, stdev=18.36, samples=20 00:24:05.808 lat (msec) : 10=0.22%, 20=0.57%, 50=98.91%, 100=0.31% 00:24:05.808 cpu : usr=97.41%, sys=1.82%, ctx=49, majf=0, minf=58 00:24:05.808 IO depths : 1=3.1%, 2=6.7%, 4=21.3%, 8=59.3%, 16=9.6%, 32=0.0%, >=64=0.0% 00:24:05.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.808 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.808 issued rwts: total=4582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.808 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.808 filename1: (groupid=0, jobs=1): err= 0: pid=2909813: Wed May 15 11:04:20 2024 00:24:05.808 read: IOPS=455, BW=1821KiB/s (1865kB/s)(17.8MiB/10027msec) 00:24:05.808 slat (usec): min=8, max=131, avg=31.16, stdev=19.05 00:24:05.808 clat (usec): min=14593, max=75157, avg=34883.17, stdev=3984.43 00:24:05.808 lat (usec): min=14602, max=75191, avg=34914.33, stdev=3984.52 00:24:05.808 clat percentiles (usec): 00:24:05.808 | 1.00th=[20055], 5.00th=[32375], 10.00th=[33162], 20.00th=[33817], 00:24:05.808 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34866], 60.00th=[34866], 00:24:05.808 | 70.00th=[34866], 80.00th=[35390], 90.00th=[36439], 95.00th=[39060], 00:24:05.808 | 99.00th=[49021], 99.50th=[50594], 99.90th=[74974], 99.95th=[74974], 00:24:05.808 | 99.99th=[74974] 00:24:05.808 bw ( KiB/s): min= 1536, max= 1920, per=4.17%, avg=1819.80, stdev=89.10, samples=20 00:24:05.808 iops : min= 384, max= 480, avg=454.95, stdev=22.27, samples=20 00:24:05.808 lat (msec) : 20=1.03%, 50=98.23%, 100=0.74% 00:24:05.808 cpu : usr=97.47%, sys=1.77%, ctx=57, majf=0, minf=67 00:24:05.808 IO depths : 1=4.6%, 2=9.7%, 4=23.3%, 8=54.4%, 16=8.0%, 32=0.0%, >=64=0.0% 00:24:05.808 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.808 complete : 0=0.0%, 4=94.0%, 
8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.809 issued rwts: total=4566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.809 filename1: (groupid=0, jobs=1): err= 0: pid=2909814: Wed May 15 11:04:20 2024 00:24:05.809 read: IOPS=455, BW=1821KiB/s (1865kB/s)(17.8MiB/10017msec) 00:24:05.809 slat (usec): min=8, max=1107, avg=33.79, stdev=23.45 00:24:05.809 clat (usec): min=14413, max=63217, avg=34874.59, stdev=2714.84 00:24:05.809 lat (usec): min=14608, max=63264, avg=34908.38, stdev=2715.12 00:24:05.809 clat percentiles (usec): 00:24:05.809 | 1.00th=[28443], 5.00th=[32900], 10.00th=[33424], 20.00th=[33817], 00:24:05.809 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34866], 60.00th=[34866], 00:24:05.809 | 70.00th=[34866], 80.00th=[35390], 90.00th=[35914], 95.00th=[38011], 00:24:05.809 | 99.00th=[43779], 99.50th=[49546], 99.90th=[63177], 99.95th=[63177], 00:24:05.809 | 99.99th=[63177] 00:24:05.809 bw ( KiB/s): min= 1536, max= 1920, per=4.16%, avg=1817.40, stdev=86.99, samples=20 00:24:05.809 iops : min= 384, max= 480, avg=454.35, stdev=21.75, samples=20 00:24:05.809 lat (msec) : 20=0.18%, 50=99.34%, 100=0.48% 00:24:05.809 cpu : usr=95.65%, sys=2.55%, ctx=88, majf=0, minf=47 00:24:05.809 IO depths : 1=3.2%, 2=9.2%, 4=24.5%, 8=53.8%, 16=9.3%, 32=0.0%, >=64=0.0% 00:24:05.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.809 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.809 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.809 filename1: (groupid=0, jobs=1): err= 0: pid=2909815: Wed May 15 11:04:20 2024 00:24:05.809 read: IOPS=455, BW=1823KiB/s (1866kB/s)(17.8MiB/10007msec) 00:24:05.809 slat (usec): min=8, max=109, avg=36.60, stdev=14.45 00:24:05.809 clat (usec): min=10028, max=78299, avg=34761.40, stdev=2965.95 00:24:05.809 lat (usec): min=10048, max=78332, avg=34798.00, stdev=2965.58 00:24:05.809 clat percentiles (usec): 00:24:05.809 | 1.00th=[28181], 5.00th=[33162], 10.00th=[33817], 20.00th=[33817], 00:24:05.809 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[34866], 00:24:05.809 | 70.00th=[34866], 80.00th=[35390], 90.00th=[35914], 95.00th=[36963], 00:24:05.809 | 99.00th=[42206], 99.50th=[50070], 99.90th=[65799], 99.95th=[78119], 00:24:05.809 | 99.99th=[78119] 00:24:05.809 bw ( KiB/s): min= 1536, max= 1920, per=4.17%, avg=1818.95, stdev=100.78, samples=19 00:24:05.809 iops : min= 384, max= 480, avg=454.74, stdev=25.19, samples=19 00:24:05.809 lat (msec) : 20=0.35%, 50=99.12%, 100=0.53% 00:24:05.809 cpu : usr=97.69%, sys=1.71%, ctx=59, majf=0, minf=42 00:24:05.809 IO depths : 1=6.1%, 2=12.2%, 4=24.9%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:24:05.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.809 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.809 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.809 filename1: (groupid=0, jobs=1): err= 0: pid=2909816: Wed May 15 11:04:20 2024 00:24:05.809 read: IOPS=451, BW=1808KiB/s (1851kB/s)(17.7MiB/10010msec) 00:24:05.809 slat (nsec): min=8048, max=93981, avg=31658.66, stdev=14551.43 00:24:05.809 clat (usec): min=19189, max=79376, avg=35190.51, stdev=3813.94 00:24:05.809 lat (usec): min=19199, max=79397, avg=35222.17, stdev=3812.87 00:24:05.809 clat 
percentiles (usec): 00:24:05.809 | 1.00th=[23462], 5.00th=[32900], 10.00th=[33817], 20.00th=[34341], 00:24:05.809 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34866], 60.00th=[34866], 00:24:05.809 | 70.00th=[35390], 80.00th=[35390], 90.00th=[36963], 95.00th=[38536], 00:24:05.809 | 99.00th=[51119], 99.50th=[62653], 99.90th=[68682], 99.95th=[68682], 00:24:05.809 | 99.99th=[79168] 00:24:05.809 bw ( KiB/s): min= 1536, max= 1920, per=4.13%, avg=1803.79, stdev=76.70, samples=19 00:24:05.809 iops : min= 384, max= 480, avg=450.95, stdev=19.18, samples=19 00:24:05.809 lat (msec) : 20=0.13%, 50=98.67%, 100=1.19% 00:24:05.809 cpu : usr=97.84%, sys=1.70%, ctx=27, majf=0, minf=60 00:24:05.809 IO depths : 1=1.4%, 2=3.4%, 4=19.3%, 8=64.5%, 16=11.3%, 32=0.0%, >=64=0.0% 00:24:05.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.809 complete : 0=0.0%, 4=93.8%, 8=0.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.809 issued rwts: total=4524,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.809 filename1: (groupid=0, jobs=1): err= 0: pid=2909817: Wed May 15 11:04:20 2024 00:24:05.809 read: IOPS=451, BW=1806KiB/s (1850kB/s)(17.6MiB/10002msec) 00:24:05.809 slat (usec): min=8, max=126, avg=42.20, stdev=23.06 00:24:05.809 clat (usec): min=11438, max=70966, avg=35113.53, stdev=5179.91 00:24:05.809 lat (usec): min=11510, max=70997, avg=35155.73, stdev=5180.67 00:24:05.809 clat percentiles (usec): 00:24:05.809 | 1.00th=[20317], 5.00th=[29230], 10.00th=[33162], 20.00th=[33817], 00:24:05.809 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[34866], 00:24:05.809 | 70.00th=[35390], 80.00th=[35390], 90.00th=[37487], 95.00th=[42206], 00:24:05.809 | 99.00th=[58459], 99.50th=[64226], 99.90th=[70779], 99.95th=[70779], 00:24:05.809 | 99.99th=[70779] 00:24:05.809 bw ( KiB/s): min= 1520, max= 1920, per=4.14%, avg=1807.58, stdev=97.85, samples=19 00:24:05.809 iops : min= 380, max= 480, avg=451.89, stdev=24.46, samples=19 00:24:05.809 lat (msec) : 20=0.97%, 50=96.57%, 100=2.46% 00:24:05.809 cpu : usr=95.53%, sys=2.53%, ctx=73, majf=0, minf=59 00:24:05.809 IO depths : 1=2.1%, 2=6.2%, 4=20.0%, 8=60.6%, 16=11.2%, 32=0.0%, >=64=0.0% 00:24:05.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.809 complete : 0=0.0%, 4=93.4%, 8=1.6%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.809 issued rwts: total=4517,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.809 filename1: (groupid=0, jobs=1): err= 0: pid=2909818: Wed May 15 11:04:20 2024 00:24:05.809 read: IOPS=456, BW=1827KiB/s (1870kB/s)(17.8MiB/10006msec) 00:24:05.809 slat (usec): min=8, max=126, avg=38.47, stdev=20.48 00:24:05.809 clat (usec): min=12792, max=60880, avg=34732.64, stdev=4227.47 00:24:05.809 lat (usec): min=12849, max=60899, avg=34771.11, stdev=4229.31 00:24:05.809 clat percentiles (usec): 00:24:05.809 | 1.00th=[17695], 5.00th=[31327], 10.00th=[33162], 20.00th=[33817], 00:24:05.809 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[34866], 00:24:05.809 | 70.00th=[34866], 80.00th=[35390], 90.00th=[36963], 95.00th=[39060], 00:24:05.809 | 99.00th=[52691], 99.50th=[55837], 99.90th=[58983], 99.95th=[59507], 00:24:05.809 | 99.99th=[61080] 00:24:05.809 bw ( KiB/s): min= 1664, max= 1968, per=4.19%, avg=1829.42, stdev=73.90, samples=19 00:24:05.809 iops : min= 416, max= 492, avg=457.32, stdev=18.51, samples=19 00:24:05.809 lat (msec) : 20=1.40%, 50=97.15%, 
100=1.44% 00:24:05.809 cpu : usr=96.21%, sys=2.33%, ctx=55, majf=0, minf=49 00:24:05.809 IO depths : 1=4.7%, 2=9.7%, 4=22.4%, 8=54.9%, 16=8.3%, 32=0.0%, >=64=0.0% 00:24:05.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.809 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.809 issued rwts: total=4569,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.809 filename1: (groupid=0, jobs=1): err= 0: pid=2909819: Wed May 15 11:04:20 2024 00:24:05.809 read: IOPS=466, BW=1867KiB/s (1912kB/s)(18.2MiB/10009msec) 00:24:05.809 slat (usec): min=8, max=128, avg=30.50, stdev=15.97 00:24:05.809 clat (usec): min=12521, max=54543, avg=34042.79, stdev=4441.03 00:24:05.809 lat (usec): min=12530, max=54575, avg=34073.29, stdev=4443.89 00:24:05.809 clat percentiles (usec): 00:24:05.809 | 1.00th=[15401], 5.00th=[24249], 10.00th=[31851], 20.00th=[33817], 00:24:05.809 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[34866], 00:24:05.809 | 70.00th=[34866], 80.00th=[35390], 90.00th=[36439], 95.00th=[38536], 00:24:05.809 | 99.00th=[47449], 99.50th=[52691], 99.90th=[54264], 99.95th=[54264], 00:24:05.809 | 99.99th=[54789] 00:24:05.809 bw ( KiB/s): min= 1712, max= 2104, per=4.27%, avg=1862.00, stdev=92.38, samples=20 00:24:05.809 iops : min= 428, max= 526, avg=465.50, stdev=23.10, samples=20 00:24:05.809 lat (msec) : 20=2.29%, 50=97.11%, 100=0.60% 00:24:05.809 cpu : usr=97.72%, sys=1.79%, ctx=16, majf=0, minf=58 00:24:05.809 IO depths : 1=4.2%, 2=8.5%, 4=20.4%, 8=58.4%, 16=8.6%, 32=0.0%, >=64=0.0% 00:24:05.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.809 complete : 0=0.0%, 4=93.2%, 8=1.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.809 issued rwts: total=4671,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.809 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.809 filename1: (groupid=0, jobs=1): err= 0: pid=2909820: Wed May 15 11:04:20 2024 00:24:05.809 read: IOPS=443, BW=1774KiB/s (1816kB/s)(17.4MiB/10017msec) 00:24:05.809 slat (usec): min=8, max=211, avg=53.05, stdev=25.06 00:24:05.809 clat (usec): min=10766, max=63268, avg=35680.98, stdev=5983.08 00:24:05.809 lat (usec): min=10794, max=63285, avg=35734.03, stdev=5982.57 00:24:05.809 clat percentiles (usec): 00:24:05.809 | 1.00th=[18220], 5.00th=[30016], 10.00th=[32900], 20.00th=[33817], 00:24:05.809 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34866], 60.00th=[34866], 00:24:05.809 | 70.00th=[35390], 80.00th=[36439], 90.00th=[41681], 95.00th=[49546], 00:24:05.810 | 99.00th=[58459], 99.50th=[61080], 99.90th=[63177], 99.95th=[63177], 00:24:05.810 | 99.99th=[63177] 00:24:05.810 bw ( KiB/s): min= 1536, max= 1920, per=4.06%, avg=1770.20, stdev=97.60, samples=20 00:24:05.810 iops : min= 384, max= 480, avg=442.55, stdev=24.40, samples=20 00:24:05.810 lat (msec) : 20=1.35%, 50=94.28%, 100=4.37% 00:24:05.810 cpu : usr=97.99%, sys=1.53%, ctx=33, majf=0, minf=76 00:24:05.810 IO depths : 1=3.0%, 2=7.1%, 4=19.2%, 8=60.7%, 16=10.0%, 32=0.0%, >=64=0.0% 00:24:05.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.810 complete : 0=0.0%, 4=92.9%, 8=1.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.810 issued rwts: total=4442,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.810 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.810 filename2: (groupid=0, jobs=1): err= 0: pid=2909821: Wed May 15 11:04:20 2024 00:24:05.810 read: IOPS=454, 
BW=1817KiB/s (1861kB/s)(17.8MiB/10016msec) 00:24:05.810 slat (usec): min=8, max=132, avg=39.08, stdev=17.11 00:24:05.810 clat (usec): min=22521, max=63061, avg=34884.17, stdev=2384.84 00:24:05.810 lat (usec): min=22552, max=63086, avg=34923.26, stdev=2384.28 00:24:05.810 clat percentiles (usec): 00:24:05.810 | 1.00th=[30278], 5.00th=[33162], 10.00th=[33424], 20.00th=[33817], 00:24:05.810 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[34866], 00:24:05.810 | 70.00th=[34866], 80.00th=[35390], 90.00th=[36439], 95.00th=[38011], 00:24:05.810 | 99.00th=[43779], 99.50th=[49546], 99.90th=[59507], 99.95th=[62653], 00:24:05.810 | 99.99th=[63177] 00:24:05.810 bw ( KiB/s): min= 1648, max= 1920, per=4.15%, avg=1813.40, stdev=64.76, samples=20 00:24:05.810 iops : min= 412, max= 480, avg=453.35, stdev=16.19, samples=20 00:24:05.810 lat (msec) : 50=99.54%, 100=0.46% 00:24:05.810 cpu : usr=97.17%, sys=2.10%, ctx=102, majf=0, minf=55 00:24:05.810 IO depths : 1=5.3%, 2=10.9%, 4=24.2%, 8=52.4%, 16=7.3%, 32=0.0%, >=64=0.0% 00:24:05.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.810 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.810 issued rwts: total=4550,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.810 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.810 filename2: (groupid=0, jobs=1): err= 0: pid=2909822: Wed May 15 11:04:20 2024 00:24:05.810 read: IOPS=465, BW=1861KiB/s (1906kB/s)(18.2MiB/10021msec) 00:24:05.810 slat (usec): min=5, max=610, avg=28.11, stdev=21.03 00:24:05.810 clat (usec): min=10587, max=64849, avg=34162.12, stdev=5156.18 00:24:05.810 lat (usec): min=10607, max=64859, avg=34190.23, stdev=5157.63 00:24:05.810 clat percentiles (usec): 00:24:05.810 | 1.00th=[13960], 5.00th=[23987], 10.00th=[32375], 20.00th=[33817], 00:24:05.810 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34866], 60.00th=[34866], 00:24:05.810 | 70.00th=[34866], 80.00th=[35390], 90.00th=[36439], 95.00th=[38011], 00:24:05.810 | 99.00th=[53216], 99.50th=[57410], 99.90th=[64750], 99.95th=[64750], 00:24:05.810 | 99.99th=[64750] 00:24:05.810 bw ( KiB/s): min= 1744, max= 2128, per=4.26%, avg=1858.60, stdev=85.75, samples=20 00:24:05.810 iops : min= 436, max= 532, avg=464.65, stdev=21.44, samples=20 00:24:05.810 lat (msec) : 20=3.24%, 50=95.26%, 100=1.50% 00:24:05.810 cpu : usr=92.99%, sys=3.53%, ctx=183, majf=0, minf=96 00:24:05.810 IO depths : 1=2.3%, 2=6.7%, 4=21.3%, 8=59.4%, 16=10.3%, 32=0.0%, >=64=0.0% 00:24:05.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.810 complete : 0=0.0%, 4=93.4%, 8=1.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.810 issued rwts: total=4663,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.810 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.810 filename2: (groupid=0, jobs=1): err= 0: pid=2909823: Wed May 15 11:04:20 2024 00:24:05.810 read: IOPS=452, BW=1810KiB/s (1853kB/s)(17.7MiB/10007msec) 00:24:05.810 slat (usec): min=8, max=120, avg=35.25, stdev=15.66 00:24:05.810 clat (usec): min=19948, max=76136, avg=35099.83, stdev=3641.56 00:24:05.810 lat (usec): min=19958, max=76169, avg=35135.07, stdev=3641.38 00:24:05.810 clat percentiles (usec): 00:24:05.810 | 1.00th=[26608], 5.00th=[33162], 10.00th=[33817], 20.00th=[34341], 00:24:05.810 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34866], 60.00th=[34866], 00:24:05.810 | 70.00th=[34866], 80.00th=[35390], 90.00th=[36439], 95.00th=[38011], 00:24:05.810 | 99.00th=[51119], 99.50th=[55313], 
99.90th=[76022], 99.95th=[76022], 00:24:05.810 | 99.99th=[76022] 00:24:05.810 bw ( KiB/s): min= 1536, max= 1920, per=4.14%, avg=1805.47, stdev=83.18, samples=19 00:24:05.810 iops : min= 384, max= 480, avg=451.37, stdev=20.80, samples=19 00:24:05.810 lat (msec) : 20=0.04%, 50=98.94%, 100=1.02% 00:24:05.810 cpu : usr=97.78%, sys=1.77%, ctx=19, majf=0, minf=48 00:24:05.810 IO depths : 1=2.4%, 2=6.5%, 4=18.7%, 8=62.3%, 16=10.1%, 32=0.0%, >=64=0.0% 00:24:05.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.810 complete : 0=0.0%, 4=92.5%, 8=1.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.810 issued rwts: total=4528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.810 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.810 filename2: (groupid=0, jobs=1): err= 0: pid=2909824: Wed May 15 11:04:20 2024 00:24:05.810 read: IOPS=463, BW=1854KiB/s (1898kB/s)(18.1MiB/10015msec) 00:24:05.810 slat (usec): min=8, max=128, avg=25.04, stdev=15.25 00:24:05.810 clat (usec): min=8304, max=58882, avg=34321.93, stdev=5177.07 00:24:05.810 lat (usec): min=8313, max=58892, avg=34346.97, stdev=5178.96 00:24:05.810 clat percentiles (usec): 00:24:05.810 | 1.00th=[18744], 5.00th=[22414], 10.00th=[31327], 20.00th=[33817], 00:24:05.810 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34866], 60.00th=[34866], 00:24:05.810 | 70.00th=[34866], 80.00th=[35390], 90.00th=[37487], 95.00th=[41157], 00:24:05.810 | 99.00th=[50070], 99.50th=[52691], 99.90th=[58459], 99.95th=[58459], 00:24:05.810 | 99.99th=[58983] 00:24:05.810 bw ( KiB/s): min= 1792, max= 2072, per=4.24%, avg=1850.80, stdev=87.80, samples=20 00:24:05.810 iops : min= 448, max= 518, avg=462.70, stdev=21.95, samples=20 00:24:05.810 lat (msec) : 10=0.26%, 20=1.44%, 50=97.09%, 100=1.21% 00:24:05.810 cpu : usr=95.95%, sys=2.42%, ctx=165, majf=0, minf=43 00:24:05.810 IO depths : 1=4.0%, 2=8.6%, 4=21.7%, 8=56.9%, 16=8.7%, 32=0.0%, >=64=0.0% 00:24:05.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.810 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.810 issued rwts: total=4641,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.810 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.810 filename2: (groupid=0, jobs=1): err= 0: pid=2909825: Wed May 15 11:04:20 2024 00:24:05.810 read: IOPS=455, BW=1821KiB/s (1865kB/s)(17.8MiB/10017msec) 00:24:05.810 slat (usec): min=10, max=116, avg=37.23, stdev=13.89 00:24:05.810 clat (usec): min=24794, max=70640, avg=34811.71, stdev=1796.65 00:24:05.810 lat (usec): min=24832, max=70658, avg=34848.93, stdev=1795.27 00:24:05.810 clat percentiles (usec): 00:24:05.810 | 1.00th=[32375], 5.00th=[33162], 10.00th=[33817], 20.00th=[33817], 00:24:05.810 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[34866], 00:24:05.810 | 70.00th=[34866], 80.00th=[35390], 90.00th=[35914], 95.00th=[37487], 00:24:05.810 | 99.00th=[41157], 99.50th=[43779], 99.90th=[47973], 99.95th=[70779], 00:24:05.810 | 99.99th=[70779] 00:24:05.810 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1817.40, stdev=67.05, samples=20 00:24:05.810 iops : min= 416, max= 480, avg=454.35, stdev=16.76, samples=20 00:24:05.810 lat (msec) : 50=99.93%, 100=0.07% 00:24:05.810 cpu : usr=94.15%, sys=2.88%, ctx=135, majf=0, minf=56 00:24:05.810 IO depths : 1=6.1%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.4%, 32=0.0%, >=64=0.0% 00:24:05.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.810 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:24:05.810 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.810 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.810 filename2: (groupid=0, jobs=1): err= 0: pid=2909826: Wed May 15 11:04:20 2024 00:24:05.810 read: IOPS=459, BW=1839KiB/s (1883kB/s)(18.0MiB/10008msec) 00:24:05.810 slat (usec): min=8, max=154, avg=37.12, stdev=15.89 00:24:05.810 clat (usec): min=9411, max=57064, avg=34488.01, stdev=3132.01 00:24:05.810 lat (usec): min=9452, max=57097, avg=34525.14, stdev=3132.59 00:24:05.810 clat percentiles (usec): 00:24:05.810 | 1.00th=[21627], 5.00th=[32375], 10.00th=[33424], 20.00th=[33817], 00:24:05.810 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[34866], 00:24:05.810 | 70.00th=[34866], 80.00th=[35390], 90.00th=[35914], 95.00th=[37487], 00:24:05.810 | 99.00th=[45876], 99.50th=[50594], 99.90th=[56886], 99.95th=[56886], 00:24:05.810 | 99.99th=[56886] 00:24:05.810 bw ( KiB/s): min= 1536, max= 2016, per=4.21%, avg=1835.79, stdev=104.08, samples=19 00:24:05.810 iops : min= 384, max= 504, avg=458.95, stdev=26.02, samples=19 00:24:05.810 lat (msec) : 10=0.35%, 20=0.26%, 50=98.78%, 100=0.61% 00:24:05.810 cpu : usr=92.14%, sys=3.71%, ctx=215, majf=0, minf=38 00:24:05.810 IO depths : 1=5.3%, 2=11.2%, 4=23.8%, 8=52.5%, 16=7.2%, 32=0.0%, >=64=0.0% 00:24:05.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.810 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.810 issued rwts: total=4600,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.810 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.810 filename2: (groupid=0, jobs=1): err= 0: pid=2909827: Wed May 15 11:04:20 2024 00:24:05.810 read: IOPS=455, BW=1823KiB/s (1867kB/s)(17.8MiB/10006msec) 00:24:05.811 slat (nsec): min=8496, max=92985, avg=36109.72, stdev=13685.47 00:24:05.811 clat (usec): min=10105, max=65019, avg=34765.97, stdev=2650.77 00:24:05.811 lat (usec): min=10149, max=65063, avg=34802.08, stdev=2650.31 00:24:05.811 clat percentiles (usec): 00:24:05.811 | 1.00th=[32375], 5.00th=[33162], 10.00th=[33817], 20.00th=[33817], 00:24:05.811 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[34866], 00:24:05.811 | 70.00th=[34866], 80.00th=[35390], 90.00th=[35914], 95.00th=[36963], 00:24:05.811 | 99.00th=[41681], 99.50th=[43779], 99.90th=[64750], 99.95th=[64750], 00:24:05.811 | 99.99th=[65274] 00:24:05.811 bw ( KiB/s): min= 1539, max= 1920, per=4.17%, avg=1819.11, stdev=100.31, samples=19 00:24:05.811 iops : min= 384, max= 480, avg=454.74, stdev=25.19, samples=19 00:24:05.811 lat (msec) : 20=0.35%, 50=99.30%, 100=0.35% 00:24:05.811 cpu : usr=98.01%, sys=1.59%, ctx=17, majf=0, minf=43 00:24:05.811 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:24:05.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.811 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.811 issued rwts: total=4560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.811 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.811 filename2: (groupid=0, jobs=1): err= 0: pid=2909828: Wed May 15 11:04:20 2024 00:24:05.811 read: IOPS=461, BW=1845KiB/s (1889kB/s)(18.0MiB/10016msec) 00:24:05.811 slat (usec): min=8, max=113, avg=27.97, stdev=14.56 00:24:05.811 clat (usec): min=9451, max=62703, avg=34477.26, stdev=4134.75 00:24:05.811 lat (usec): min=9469, max=62723, avg=34505.23, stdev=4135.78 00:24:05.811 clat percentiles (usec): 
00:24:05.811 | 1.00th=[16712], 5.00th=[30278], 10.00th=[33162], 20.00th=[33817], 00:24:05.811 | 30.00th=[34341], 40.00th=[34341], 50.00th=[34341], 60.00th=[34866], 00:24:05.811 | 70.00th=[34866], 80.00th=[35390], 90.00th=[36439], 95.00th=[38011], 00:24:05.811 | 99.00th=[47449], 99.50th=[58459], 99.90th=[60556], 99.95th=[62653], 00:24:05.811 | 99.99th=[62653] 00:24:05.811 bw ( KiB/s): min= 1664, max= 1968, per=4.22%, avg=1841.00, stdev=72.74, samples=20 00:24:05.811 iops : min= 416, max= 492, avg=460.25, stdev=18.19, samples=20 00:24:05.811 lat (msec) : 10=0.13%, 20=1.88%, 50=97.27%, 100=0.71% 00:24:05.811 cpu : usr=97.87%, sys=1.60%, ctx=51, majf=0, minf=58 00:24:05.811 IO depths : 1=3.6%, 2=7.3%, 4=19.5%, 8=60.0%, 16=9.5%, 32=0.0%, >=64=0.0% 00:24:05.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.811 complete : 0=0.0%, 4=93.2%, 8=1.7%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:05.811 issued rwts: total=4619,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:05.811 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:05.811 00:24:05.811 Run status group 0 (all jobs): 00:24:05.811 READ: bw=42.6MiB/s (44.7MB/s), 1774KiB/s-1867KiB/s (1816kB/s-1912kB/s), io=427MiB (448MB), run=10002-10031msec 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:05.811 bdev_null0 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
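Annotation: the xtrace above is the harness tearing down the three subsystems from the previous iteration and rebuilding subsystem 0 for the 8k/16k/128k run: a null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1 is created, wrapped in an NVMe-oF subsystem, and given a namespace; the listener call that completes the sequence follows just below. A minimal standalone sketch of the same setup using SPDK's scripts/rpc.py (command names and arguments are taken verbatim from the trace; the rpc.py path and an already-created TCP transport on the running target are assumptions):

# create a 64 MB null bdev: 512-byte blocks, 16-byte metadata, DIF type 1
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
# wrap it in a subsystem any host may reach, and attach the bdev as a namespace
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
# expose the subsystem on the target's TCP listener
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420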
00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:05.811 [2024-05-15 11:04:20.577017] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:05.811 bdev_null1 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:05.811 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf 
/dev/fd/62 /dev/fd/61 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:05.812 { 00:24:05.812 "params": { 00:24:05.812 "name": "Nvme$subsystem", 00:24:05.812 "trtype": "$TEST_TRANSPORT", 00:24:05.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:05.812 "adrfam": "ipv4", 00:24:05.812 "trsvcid": "$NVMF_PORT", 00:24:05.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:05.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:05.812 "hdgst": ${hdgst:-false}, 00:24:05.812 "ddgst": ${ddgst:-false} 00:24:05.812 }, 00:24:05.812 "method": "bdev_nvme_attach_controller" 00:24:05.812 } 00:24:05.812 EOF 00:24:05.812 )") 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # shift 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libasan 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:05.812 { 00:24:05.812 "params": { 00:24:05.812 "name": "Nvme$subsystem", 00:24:05.812 "trtype": "$TEST_TRANSPORT", 00:24:05.812 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:05.812 "adrfam": "ipv4", 00:24:05.812 "trsvcid": "$NVMF_PORT", 00:24:05.812 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:05.812 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:05.812 "hdgst": ${hdgst:-false}, 00:24:05.812 "ddgst": ${ddgst:-false} 00:24:05.812 }, 00:24:05.812 "method": "bdev_nvme_attach_controller" 00:24:05.812 } 00:24:05.812 EOF 00:24:05.812 )") 00:24:05.812 11:04:20 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:05.812 "params": { 00:24:05.812 "name": "Nvme0", 00:24:05.812 "trtype": "tcp", 00:24:05.812 "traddr": "10.0.0.2", 00:24:05.812 "adrfam": "ipv4", 00:24:05.812 "trsvcid": "4420", 00:24:05.812 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:05.812 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:05.812 "hdgst": false, 00:24:05.812 "ddgst": false 00:24:05.812 }, 00:24:05.812 "method": "bdev_nvme_attach_controller" 00:24:05.812 },{ 00:24:05.812 "params": { 00:24:05.812 "name": "Nvme1", 00:24:05.812 "trtype": "tcp", 00:24:05.812 "traddr": "10.0.0.2", 00:24:05.812 "adrfam": "ipv4", 00:24:05.812 "trsvcid": "4420", 00:24:05.812 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:05.812 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:05.812 "hdgst": false, 00:24:05.812 "ddgst": false 00:24:05.812 }, 00:24:05.812 "method": "bdev_nvme_attach_controller" 00:24:05.812 }' 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:24:05.812 11:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:05.812 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:05.812 ... 00:24:05.812 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:05.812 ... 
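Annotation: what runs next is fio with SPDK's bdev ioengine preloaded. The generated JSON above (one bdev_nvme_attach_controller stanza per subsystem) is handed to fio on /dev/fd/62 and the generated job file on /dev/fd/61. A reduced sketch of the same invocation: the plugin and fio paths are the workspace paths from the trace, while the JSON envelope ("subsystems"/"bdev"/"config") is an assumption based on SPDK's gen_nvmf_target_json helper and is not itself visible in the trace.

# bdev.json: the Nvme0 attach stanza from the trace, wrapped in the assumed
# SPDK JSON-config envelope (Nvme1 would be a second "config" entry)
cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json job.fio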
00:24:05.812 fio-3.35 00:24:05.812 Starting 4 threads 00:24:05.812 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.079 00:24:11.079 filename0: (groupid=0, jobs=1): err= 0: pid=2911206: Wed May 15 11:04:26 2024 00:24:11.079 read: IOPS=1953, BW=15.3MiB/s (16.0MB/s)(76.3MiB/5001msec) 00:24:11.079 slat (nsec): min=4540, max=49253, avg=14190.77, stdev=4740.55 00:24:11.079 clat (usec): min=2224, max=7751, avg=4052.95, stdev=529.52 00:24:11.079 lat (usec): min=2233, max=7763, avg=4067.15, stdev=529.21 00:24:11.079 clat percentiles (usec): 00:24:11.079 | 1.00th=[ 3097], 5.00th=[ 3458], 10.00th=[ 3654], 20.00th=[ 3752], 00:24:11.079 | 30.00th=[ 3851], 40.00th=[ 3949], 50.00th=[ 3982], 60.00th=[ 4015], 00:24:11.079 | 70.00th=[ 4047], 80.00th=[ 4080], 90.00th=[ 4686], 95.00th=[ 5407], 00:24:11.079 | 99.00th=[ 5997], 99.50th=[ 6128], 99.90th=[ 6521], 99.95th=[ 6652], 00:24:11.079 | 99.99th=[ 7767] 00:24:11.079 bw ( KiB/s): min=15440, max=15856, per=24.92%, avg=15623.11, stdev=131.48, samples=9 00:24:11.079 iops : min= 1930, max= 1982, avg=1952.89, stdev=16.44, samples=9 00:24:11.079 lat (msec) : 4=56.48%, 10=43.52% 00:24:11.079 cpu : usr=92.34%, sys=7.02%, ctx=8, majf=0, minf=35 00:24:11.079 IO depths : 1=0.2%, 2=2.3%, 4=69.5%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:11.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.079 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.079 issued rwts: total=9767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.079 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:11.079 filename0: (groupid=0, jobs=1): err= 0: pid=2911207: Wed May 15 11:04:26 2024 00:24:11.079 read: IOPS=1954, BW=15.3MiB/s (16.0MB/s)(76.4MiB/5004msec) 00:24:11.079 slat (nsec): min=4438, max=67638, avg=11675.03, stdev=4163.03 00:24:11.079 clat (usec): min=2213, max=6716, avg=4056.55, stdev=590.80 00:24:11.079 lat (usec): min=2222, max=6743, avg=4068.22, stdev=590.91 00:24:11.079 clat percentiles (usec): 00:24:11.079 | 1.00th=[ 2868], 5.00th=[ 3326], 10.00th=[ 3556], 20.00th=[ 3752], 00:24:11.079 | 30.00th=[ 3851], 40.00th=[ 3916], 50.00th=[ 3982], 60.00th=[ 4015], 00:24:11.079 | 70.00th=[ 4047], 80.00th=[ 4113], 90.00th=[ 4817], 95.00th=[ 5669], 00:24:11.079 | 99.00th=[ 5997], 99.50th=[ 6128], 99.90th=[ 6521], 99.95th=[ 6652], 00:24:11.079 | 99.99th=[ 6718] 00:24:11.079 bw ( KiB/s): min=15152, max=16144, per=24.94%, avg=15640.00, stdev=359.22, samples=10 00:24:11.079 iops : min= 1894, max= 2018, avg=1955.00, stdev=44.90, samples=10 00:24:11.079 lat (msec) : 4=56.04%, 10=43.96% 00:24:11.079 cpu : usr=92.92%, sys=6.14%, ctx=60, majf=0, minf=78 00:24:11.079 IO depths : 1=0.6%, 2=4.5%, 4=68.0%, 8=26.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:11.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.079 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.079 issued rwts: total=9778,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.079 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:11.079 filename1: (groupid=0, jobs=1): err= 0: pid=2911208: Wed May 15 11:04:26 2024 00:24:11.079 read: IOPS=1941, BW=15.2MiB/s (15.9MB/s)(75.9MiB/5003msec) 00:24:11.079 slat (usec): min=4, max=240, avg=11.87, stdev= 4.60 00:24:11.079 clat (usec): min=2468, max=47982, avg=4084.49, stdev=1357.40 00:24:11.079 lat (usec): min=2476, max=47996, avg=4096.36, stdev=1357.26 00:24:11.079 clat percentiles (usec): 00:24:11.079 | 1.00th=[ 3097], 5.00th=[ 3458], 10.00th=[ 3621], 
20.00th=[ 3752], 00:24:11.079 | 30.00th=[ 3818], 40.00th=[ 3949], 50.00th=[ 3982], 60.00th=[ 4015], 00:24:11.079 | 70.00th=[ 4047], 80.00th=[ 4113], 90.00th=[ 4752], 95.00th=[ 5145], 00:24:11.079 | 99.00th=[ 5997], 99.50th=[ 6390], 99.90th=[ 6587], 99.95th=[47973], 00:24:11.079 | 99.99th=[47973] 00:24:11.079 bw ( KiB/s): min=14204, max=16256, per=24.79%, avg=15542.00, stdev=611.27, samples=10 00:24:11.079 iops : min= 1775, max= 2032, avg=1942.70, stdev=76.53, samples=10 00:24:11.079 lat (msec) : 4=52.04%, 10=47.87%, 50=0.08% 00:24:11.079 cpu : usr=92.76%, sys=5.82%, ctx=128, majf=0, minf=39 00:24:11.079 IO depths : 1=0.1%, 2=1.6%, 4=69.4%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:11.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.079 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.079 issued rwts: total=9715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.079 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:11.079 filename1: (groupid=0, jobs=1): err= 0: pid=2911209: Wed May 15 11:04:26 2024 00:24:11.079 read: IOPS=1990, BW=15.6MiB/s (16.3MB/s)(77.8MiB/5002msec) 00:24:11.079 slat (nsec): min=4078, max=49226, avg=11288.56, stdev=3819.99 00:24:11.079 clat (usec): min=1257, max=6461, avg=3983.08, stdev=496.32 00:24:11.079 lat (usec): min=1265, max=6469, avg=3994.37, stdev=496.26 00:24:11.079 clat percentiles (usec): 00:24:11.079 | 1.00th=[ 2868], 5.00th=[ 3326], 10.00th=[ 3523], 20.00th=[ 3720], 00:24:11.079 | 30.00th=[ 3785], 40.00th=[ 3949], 50.00th=[ 3982], 60.00th=[ 4015], 00:24:11.079 | 70.00th=[ 4047], 80.00th=[ 4080], 90.00th=[ 4490], 95.00th=[ 4948], 00:24:11.079 | 99.00th=[ 5932], 99.50th=[ 6194], 99.90th=[ 6390], 99.95th=[ 6390], 00:24:11.079 | 99.99th=[ 6456] 00:24:11.079 bw ( KiB/s): min=15392, max=16288, per=25.46%, avg=15966.22, stdev=312.66, samples=9 00:24:11.079 iops : min= 1924, max= 2036, avg=1995.78, stdev=39.08, samples=9 00:24:11.079 lat (msec) : 2=0.03%, 4=54.32%, 10=45.65% 00:24:11.079 cpu : usr=94.58%, sys=4.90%, ctx=14, majf=0, minf=45 00:24:11.079 IO depths : 1=0.1%, 2=3.0%, 4=70.0%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:11.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.080 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.080 issued rwts: total=9958,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.080 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:11.080 00:24:11.080 Run status group 0 (all jobs): 00:24:11.080 READ: bw=61.2MiB/s (64.2MB/s), 15.2MiB/s-15.6MiB/s (15.9MB/s-16.3MB/s), io=306MiB (321MB), run=5001-5004msec 00:24:11.080 11:04:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:24:11.080 11:04:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:11.080 11:04:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:11.080 11:04:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:11.080 11:04:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:11.080 11:04:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:11.080 11:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.080 11:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:11.080 11:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]]
00:24:11.080 11:04:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:24:11.080 11:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:11.080 11:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:24:11.080 11:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:11.080 11:04:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:24:11.080 11:04:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:24:11.080 11:04:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:24:11.080 11:04:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:11.080 11:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:11.080 11:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:24:11.080 11:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:11.080 11:04:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:24:11.080 11:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:11.080 11:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:24:11.080 11:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:11.080
00:24:11.080 real 0m24.032s
00:24:11.080 user 4m27.879s
00:24:11.080 sys 0m8.705s
00:24:11.080 11:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # xtrace_disable
00:24:11.080 11:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:24:11.080 ************************************
00:24:11.080 END TEST fio_dif_rand_params
00:24:11.080 ************************************
00:24:11.080 11:04:26 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest
00:24:11.080 11:04:26 nvmf_dif -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']'
00:24:11.080 11:04:26 nvmf_dif -- common/autotest_common.sh@1103 -- # xtrace_disable
00:24:11.080 11:04:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:24:11.080 ************************************
00:24:11.080 START TEST fio_dif_digest
00:24:11.080 ************************************
00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1121 -- # fio_dif_digest
00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF
00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files
00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst
00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3
00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k
00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3
00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3
00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10
00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true
00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true
00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0
00:24:11.080 11:04:26
nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:11.080 bdev_null0 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:11.080 [2024-05-15 11:04:26.939437] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:11.080 { 00:24:11.080 "params": { 00:24:11.080 "name": "Nvme$subsystem", 00:24:11.080 "trtype": "$TEST_TRANSPORT", 00:24:11.080 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:11.080 "adrfam": "ipv4", 00:24:11.080 "trsvcid": "$NVMF_PORT", 00:24:11.080 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:11.080 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:11.080 "hdgst": ${hdgst:-false}, 00:24:11.080 "ddgst": ${ddgst:-false} 00:24:11.080 }, 00:24:11.080 "method": "bdev_nvme_attach_controller" 00:24:11.080 } 00:24:11.080 EOF 00:24:11.080 )") 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1333 -- # local fio_dir=/usr/src/fio 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local sanitizers 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # shift 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local asan_lib= 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libasan 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
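Annotation: the gen_nvmf_target_json steps traced here show the pattern used throughout this run: each subsystem contributes one attach stanza via a heredoc appended to a bash array (the config+=(...) lines above), and the fragments are then comma-joined (the IFS=, and printf steps visible just below) and checked with jq. Reduced to its core, with illustrative names:

config=()
for i in 0 1; do
  config+=("$(cat <<EOF
{ "method": "bdev_nvme_attach_controller", "params": { "name": "Nvme$i" } }
EOF
)")
done
# comma-join the per-subsystem fragments inside a JSON array, validate with jq
printf '{ "config": [ %s ] }\n' "$(IFS=,; printf '%s' "${config[*]}")" | jq .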
00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:11.080 "params": { 00:24:11.080 "name": "Nvme0", 00:24:11.080 "trtype": "tcp", 00:24:11.080 "traddr": "10.0.0.2", 00:24:11.080 "adrfam": "ipv4", 00:24:11.080 "trsvcid": "4420", 00:24:11.080 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:11.080 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:11.080 "hdgst": true, 00:24:11.080 "ddgst": true 00:24:11.080 }, 00:24:11.080 "method": "bdev_nvme_attach_controller" 00:24:11.080 }' 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # for sanitizer in "${sanitizers[@]}" 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # grep libclang_rt.asan 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # awk '{print $3}' 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # asan_lib= 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # [[ -n '' ]] 00:24:11.080 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:24:11.081 11:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:11.081 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:11.081 ... 
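Annotation: the digest workload that starts here runs three jobs of 128 KiB random reads at iodepth 3 for 10 seconds against a DIF-type-3 null bdev, with TCP header and data digests enabled via the "hdgst": true / "ddgst": true fields in the attach config printed above. A hedged sketch of an equivalent job file (filename=Nvme0n1 is an assumption about how the attached namespace surfaces as a bdev; thread and time_based mirror typical spdk_bdev jobs rather than the harness's exact generated file):

cat > digest.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=10

[filename0]
filename=Nvme0n1
EOF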
00:24:11.081 fio-3.35 00:24:11.081 Starting 3 threads 00:24:11.081 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.346 00:24:23.346 filename0: (groupid=0, jobs=1): err= 0: pid=2911964: Wed May 15 11:04:37 2024 00:24:23.346 read: IOPS=151, BW=18.9MiB/s (19.8MB/s)(189MiB/10005msec) 00:24:23.346 slat (nsec): min=7754, max=62151, avg=14599.58, stdev=4955.43 00:24:23.346 clat (usec): min=11109, max=61888, avg=19816.36, stdev=3513.27 00:24:23.346 lat (usec): min=11121, max=61906, avg=19830.96, stdev=3513.28 00:24:23.346 clat percentiles (usec): 00:24:23.346 | 1.00th=[12387], 5.00th=[15533], 10.00th=[16319], 20.00th=[17695], 00:24:23.346 | 30.00th=[18744], 40.00th=[19530], 50.00th=[20055], 60.00th=[20579], 00:24:23.346 | 70.00th=[21103], 80.00th=[21365], 90.00th=[22152], 95.00th=[22938], 00:24:23.346 | 99.00th=[24249], 99.50th=[25560], 99.90th=[62129], 99.95th=[62129], 00:24:23.346 | 99.99th=[62129] 00:24:23.346 bw ( KiB/s): min=17408, max=20736, per=40.43%, avg=19342.60, stdev=846.96, samples=20 00:24:23.346 iops : min= 136, max= 162, avg=151.10, stdev= 6.63, samples=20 00:24:23.346 lat (msec) : 20=48.91%, 50=50.69%, 100=0.40% 00:24:23.346 cpu : usr=90.92%, sys=8.43%, ctx=33, majf=0, minf=195 00:24:23.346 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:23.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:23.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:23.346 issued rwts: total=1513,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:23.346 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:23.346 filename0: (groupid=0, jobs=1): err= 0: pid=2911965: Wed May 15 11:04:37 2024 00:24:23.346 read: IOPS=106, BW=13.3MiB/s (14.0MB/s)(133MiB/10018msec) 00:24:23.346 slat (nsec): min=7637, max=43827, avg=13686.22, stdev=4287.61 00:24:23.346 clat (msec): min=9, max=106, avg=28.14, stdev=13.25 00:24:23.346 lat (msec): min=9, max=106, avg=28.16, stdev=13.26 00:24:23.346 clat percentiles (msec): 00:24:23.346 | 1.00th=[ 12], 5.00th=[ 21], 10.00th=[ 22], 20.00th=[ 23], 00:24:23.346 | 30.00th=[ 24], 40.00th=[ 24], 50.00th=[ 25], 60.00th=[ 26], 00:24:23.346 | 70.00th=[ 26], 80.00th=[ 27], 90.00th=[ 31], 95.00th=[ 66], 00:24:23.346 | 99.00th=[ 69], 99.50th=[ 70], 99.90th=[ 107], 99.95th=[ 107], 00:24:23.346 | 99.99th=[ 107] 00:24:23.346 bw ( KiB/s): min=10240, max=16896, per=28.47%, avg=13619.20, stdev=1817.04, samples=20 00:24:23.346 iops : min= 80, max= 132, avg=106.40, stdev=14.20, samples=20 00:24:23.346 lat (msec) : 10=0.09%, 20=5.15%, 50=85.19%, 100=9.09%, 250=0.47% 00:24:23.346 cpu : usr=93.01%, sys=6.52%, ctx=35, majf=0, minf=160 00:24:23.346 IO depths : 1=1.5%, 2=98.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:23.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:23.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:23.346 issued rwts: total=1067,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:23.346 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:23.346 filename0: (groupid=0, jobs=1): err= 0: pid=2911966: Wed May 15 11:04:37 2024 00:24:23.346 read: IOPS=116, BW=14.5MiB/s (15.2MB/s)(146MiB/10008msec) 00:24:23.346 slat (nsec): min=6214, max=41494, avg=13526.37, stdev=4219.53 00:24:23.346 clat (msec): min=8, max=102, avg=25.77, stdev=10.66 00:24:23.346 lat (msec): min=8, max=102, avg=25.78, stdev=10.66 00:24:23.346 clat percentiles (msec): 00:24:23.346 | 1.00th=[ 12], 5.00th=[ 19], 10.00th=[ 21], 20.00th=[ 22], 
00:24:23.346 | 30.00th=[ 23], 40.00th=[ 23], 50.00th=[ 24], 60.00th=[ 25], 00:24:23.346 | 70.00th=[ 25], 80.00th=[ 26], 90.00th=[ 28], 95.00th=[ 62], 00:24:23.346 | 99.00th=[ 67], 99.50th=[ 68], 99.90th=[ 103], 99.95th=[ 103], 00:24:23.346 | 99.99th=[ 103] 00:24:23.346 bw ( KiB/s): min=11520, max=17408, per=31.06%, avg=14860.80, stdev=1475.23, samples=20 00:24:23.346 iops : min= 90, max= 136, avg=116.10, stdev=11.53, samples=20 00:24:23.346 lat (msec) : 10=0.09%, 20=9.02%, 50=84.71%, 100=5.93%, 250=0.26% 00:24:23.346 cpu : usr=91.75%, sys=7.72%, ctx=22, majf=0, minf=52 00:24:23.346 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:23.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:23.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:23.346 issued rwts: total=1164,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:23.346 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:23.346 00:24:23.346 Run status group 0 (all jobs): 00:24:23.346 READ: bw=46.7MiB/s (49.0MB/s), 13.3MiB/s-18.9MiB/s (14.0MB/s-19.8MB/s), io=468MiB (491MB), run=10005-10018msec 00:24:23.346 11:04:37 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:24:23.346 11:04:37 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:24:23.346 11:04:37 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:24:23.346 11:04:37 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:23.346 11:04:37 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:24:23.346 11:04:37 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:23.346 11:04:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.346 11:04:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:23.346 11:04:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.346 11:04:37 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:23.346 11:04:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.346 11:04:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:23.346 11:04:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.346 00:24:23.346 real 0m11.090s 00:24:23.346 user 0m28.677s 00:24:23.346 sys 0m2.542s 00:24:23.346 11:04:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:23.346 11:04:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:23.346 ************************************ 00:24:23.346 END TEST fio_dif_digest 00:24:23.346 ************************************ 00:24:23.346 11:04:38 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:23.346 11:04:38 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:24:23.346 11:04:38 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:23.346 11:04:38 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:24:23.346 11:04:38 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:23.346 11:04:38 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:24:23.346 11:04:38 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:23.346 11:04:38 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:23.346 rmmod nvme_tcp 00:24:23.346 rmmod nvme_fabrics 00:24:23.346 rmmod nvme_keyring 00:24:23.346 11:04:38 nvmf_dif -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:23.346 11:04:38 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:24:23.346 11:04:38 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:24:23.346 11:04:38 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2906022 ']' 00:24:23.346 11:04:38 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2906022 00:24:23.346 11:04:38 nvmf_dif -- common/autotest_common.sh@946 -- # '[' -z 2906022 ']' 00:24:23.346 11:04:38 nvmf_dif -- common/autotest_common.sh@950 -- # kill -0 2906022 00:24:23.346 11:04:38 nvmf_dif -- common/autotest_common.sh@951 -- # uname 00:24:23.346 11:04:38 nvmf_dif -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:24:23.346 11:04:38 nvmf_dif -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2906022 00:24:23.346 11:04:38 nvmf_dif -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:24:23.347 11:04:38 nvmf_dif -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:24:23.347 11:04:38 nvmf_dif -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2906022' 00:24:23.347 killing process with pid 2906022 00:24:23.347 11:04:38 nvmf_dif -- common/autotest_common.sh@965 -- # kill 2906022 00:24:23.347 [2024-05-15 11:04:38.108187] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:23.347 11:04:38 nvmf_dif -- common/autotest_common.sh@970 -- # wait 2906022 00:24:23.347 11:04:38 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:24:23.347 11:04:38 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:23.605 Waiting for block devices as requested 00:24:23.605 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:24:23.605 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:23.605 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:23.863 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:23.863 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:23.863 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:23.863 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:24.122 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:24.122 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:24.122 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:24.122 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:24.380 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:24.380 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:24.380 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:24.639 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:24.639 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:24.639 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:24.639 11:04:40 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:24.639 11:04:40 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:24.639 11:04:40 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:24.639 11:04:40 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:24.639 11:04:40 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.639 11:04:40 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:24.639 11:04:40 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.169 11:04:42 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:27.169 00:24:27.169 real 1m7.298s 00:24:27.169 user 6m18.562s 00:24:27.169 
sys 0m22.950s 00:24:27.169 11:04:42 nvmf_dif -- common/autotest_common.sh@1122 -- # xtrace_disable 00:24:27.169 11:04:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:27.169 ************************************ 00:24:27.169 END TEST nvmf_dif 00:24:27.169 ************************************ 00:24:27.169 11:04:42 -- spdk/autotest.sh@12 -- # hostname 00:24:27.169 11:04:42 -- spdk/autotest.sh@12 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_dif.info 00:24:27.169 geninfo: WARNING: invalid characters removed from testname! 00:24:53.705 11:05:09 -- spdk/autotest.sh@13 -- # echo '### URING mentions in coverage after the test ###:' 00:24:53.705 ### URING mentions in coverage after the test ###: 00:24:53.705 11:05:09 -- spdk/autotest.sh@14 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_dif.info 00:24:53.705 11:05:09 -- spdk/autotest.sh@14 -- # grep -i uring 00:24:53.705 11:05:09 -- spdk/autotest.sh@15 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_dif.info 00:24:53.705 11:05:09 -- spdk/autotest.sh@302 -- # run_test_wrapper nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:53.705 11:05:09 -- spdk/autotest.sh@10 -- # local test_name=nvmf_abort_qd_sizes 00:24:53.705 11:05:09 -- spdk/autotest.sh@11 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:53.705 11:05:09 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:53.705 11:05:09 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:53.705 11:05:09 -- common/autotest_common.sh@10 -- # set +x 00:24:53.705 ************************************ 00:24:53.705 START TEST nvmf_abort_qd_sizes 00:24:53.705 ************************************ 00:24:53.705 11:05:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:53.705 * Looking for test storage... 
00:24:53.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:53.705 11:05:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.705 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:24:53.705 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.705 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.705 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.705 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.705 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.705 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.705 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.705 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.705 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.705 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.963 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:53.963 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:53.963 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.963 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.963 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.963 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.963 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.963 11:05:09 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.963 11:05:09 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.963 11:05:09 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.963 11:05:09 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.963 11:05:09 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.964 11:05:09 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.964 11:05:09 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:24:53.964 11:05:09 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.964 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:24:53.964 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:53.964 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:53.964 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.964 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.964 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.964 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:53.964 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:53.964 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:53.964 11:05:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:24:53.964 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:53.964 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.964 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:53.964 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:53.964 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:53.964 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.964 11:05:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:53.964 11:05:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.964 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:53.964 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:53.964 11:05:09 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:24:53.964 11:05:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:56.496 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:56.496 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:24:56.496 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:56.496 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:56.496 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:56.496 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:56.496 11:05:12 
nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:56.496 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:24:56.496 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:56.496 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:24:56.496 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:24:56.496 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:24:56.496 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:24:56.496 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:24:56.496 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:24:56.496 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:56.496 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:56.496 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:56.496 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:56.496 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:56.496 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:56.496 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:56.496 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:56.496 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:56.496 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:56.497 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:56.497 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == 
unknown ]] 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:56.497 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:56.497 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:56.497 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:56.497 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:24:56.497 00:24:56.497 --- 10.0.0.2 ping statistics --- 00:24:56.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.497 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:56.497 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:56.497 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:24:56.497 00:24:56.497 --- 10.0.0.1 ping statistics --- 00:24:56.497 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.497 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:24:56.497 11:05:12 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:57.872 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:57.872 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:57.872 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:57.872 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:57.872 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:57.872 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:57.872 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:57.872 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:57.872 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:57.872 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:57.872 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:57.872 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:57.872 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:57.872 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:57.872 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:57.873 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:58.809 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:24:58.809 11:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.809 11:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:58.809 11:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:58.809 11:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.809 11:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:58.809 11:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:58.809 11:05:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:24:58.809 11:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:58.809 11:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@720 -- # xtrace_disable 00:24:58.809 11:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:58.809 11:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2920905 00:24:58.809 11:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:24:58.809 11:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2920905 00:24:58.809 11:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@827 -- # '[' -z 2920905 ']' 00:24:58.809 11:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.809 11:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local max_retries=100 00:24:58.809 11:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:58.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.809 11:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # xtrace_disable 00:24:58.809 11:05:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:58.809 [2024-05-15 11:05:14.955320] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:24:58.809 [2024-05-15 11:05:14.955418] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.809 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.809 [2024-05-15 11:05:15.037337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:59.067 [2024-05-15 11:05:15.156304] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:59.067 [2024-05-15 11:05:15.156373] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:59.067 [2024-05-15 11:05:15.156389] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:59.067 [2024-05-15 11:05:15.156402] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:59.067 [2024-05-15 11:05:15.156414] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:59.067 [2024-05-15 11:05:15.156501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:59.067 [2024-05-15 11:05:15.156555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:59.067 [2024-05-15 11:05:15.156668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:59.067 [2024-05-15 11:05:15.156671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.691 11:05:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:24:59.691 11:05:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # return 0 00:24:59.691 11:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:59.691 11:05:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:59.691 11:05:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:59.691 11:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.691 11:05:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:24:59.691 11:05:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:24:59.691 11:05:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:24:59.691 11:05:15 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:24:59.691 11:05:15 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:24:59.691 11:05:15 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:24:59.691 11:05:15 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:24:59.691 11:05:15 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:24:59.691 11:05:15 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:24:59.691 11:05:15 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:24:59.691 11:05:15 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:24:59.691 11:05:15 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:24:59.691 11:05:15 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:24:59.691 11:05:15 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:24:59.691 11:05:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:24:59.691 11:05:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:24:59.691 11:05:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:24:59.691 11:05:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:24:59.691 11:05:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # xtrace_disable 00:24:59.691 11:05:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:59.949 ************************************ 00:24:59.949 START TEST spdk_target_abort 00:24:59.949 ************************************ 00:24:59.949 11:05:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1121 -- # spdk_target 00:24:59.949 11:05:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:24:59.949 11:05:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:24:59.949 11:05:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.949 11:05:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:03.225 spdk_targetn1 00:25:03.225 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.225 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:03.225 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.225 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:03.225 [2024-05-15 11:05:18.786516] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.225 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:03.226 [2024-05-15 11:05:18.818523] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:03.226 [2024-05-15 11:05:18.818794] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:03.226 11:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:03.226 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.502 Initializing NVMe Controllers 00:25:06.502 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:25:06.502 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:06.502 Initialization complete. Launching workers. 00:25:06.502 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 7592, failed: 0 00:25:06.502 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1309, failed to submit 6283 00:25:06.502 success 838, unsuccess 471, failed 0 00:25:06.502 11:05:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:06.502 11:05:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:06.502 EAL: No free 2048 kB hugepages reported on node 1 00:25:09.778 Initializing NVMe Controllers 00:25:09.778 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:25:09.778 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:09.778 Initialization complete. Launching workers. 00:25:09.778 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8799, failed: 0 00:25:09.778 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1265, failed to submit 7534 00:25:09.778 success 325, unsuccess 940, failed 0 00:25:09.778 11:05:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:09.778 11:05:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:09.778 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.060 Initializing NVMe Controllers 00:25:13.060 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:25:13.060 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:13.060 Initialization complete. Launching workers. 
00:25:13.060 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30893, failed: 0 00:25:13.060 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2649, failed to submit 28244 00:25:13.060 success 526, unsuccess 2123, failed 0 00:25:13.060 11:05:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:25:13.060 11:05:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.060 11:05:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:13.060 11:05:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.060 11:05:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:25:13.060 11:05:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:13.060 11:05:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:13.995 11:05:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:13.995 11:05:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2920905 00:25:13.995 11:05:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@946 -- # '[' -z 2920905 ']' 00:25:13.995 11:05:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # kill -0 2920905 00:25:13.995 11:05:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # uname 00:25:13.995 11:05:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:25:13.995 11:05:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2920905 00:25:13.995 11:05:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:25:13.995 11:05:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:25:13.995 11:05:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2920905' 00:25:13.995 killing process with pid 2920905 00:25:13.995 11:05:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # kill 2920905 00:25:13.995 [2024-05-15 11:05:29.929347] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:13.995 11:05:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # wait 2920905 00:25:13.995 00:25:13.995 real 0m14.280s 00:25:13.995 user 0m55.911s 00:25:13.995 sys 0m2.857s 00:25:13.995 11:05:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:13.995 11:05:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:13.995 ************************************ 00:25:13.995 END TEST spdk_target_abort 00:25:13.995 ************************************ 00:25:14.254 11:05:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:25:14.254 11:05:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:25:14.254 11:05:30 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1103 -- # xtrace_disable 00:25:14.254 11:05:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:14.254 ************************************ 00:25:14.254 START TEST kernel_target_abort 00:25:14.254 ************************************ 00:25:14.254 11:05:30 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1121 -- # kernel_target 00:25:14.254 11:05:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:25:14.254 11:05:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@728 -- # local ip 00:25:14.254 11:05:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@729 -- # ip_candidates=() 00:25:14.254 11:05:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@729 -- # local -A ip_candidates 00:25:14.254 11:05:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@731 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:14.254 11:05:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@732 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:14.254 11:05:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@734 -- # [[ -z tcp ]] 00:25:14.254 11:05:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@734 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:14.254 11:05:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@735 -- # ip=NVMF_INITIATOR_IP 00:25:14.254 11:05:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@737 -- # [[ -z 10.0.0.1 ]] 00:25:14.254 11:05:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # echo 10.0.0.1 00:25:14.254 11:05:30 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:14.254 11:05:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:14.254 11:05:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:14.254 11:05:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:14.254 11:05:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:14.254 11:05:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:14.254 11:05:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:25:14.254 11:05:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:14.254 11:05:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:14.254 11:05:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:14.254 11:05:30 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:15.629 Waiting for block devices as requested 00:25:15.629 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:25:15.629 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:15.629 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:15.888 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:15.888 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:15.888 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:15.888 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:16.146 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:16.146 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:16.146 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:16.146 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:16.404 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:16.404 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:16.404 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:16.404 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:16.404 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:16.663 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1658 -- # local device=nvme0n1 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ none != none ]] 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:16.663 No valid GPT data, bailing 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:16.663 11:05:32 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:25:16.663 00:25:16.663 Discovery Log Number of Records 2, Generation counter 2 00:25:16.663 =====Discovery Log Entry 0====== 00:25:16.663 trtype: tcp 00:25:16.663 adrfam: ipv4 00:25:16.663 subtype: current discovery subsystem 00:25:16.663 treq: not specified, sq flow control disable supported 00:25:16.663 portid: 1 00:25:16.663 trsvcid: 4420 00:25:16.663 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:16.663 traddr: 10.0.0.1 00:25:16.663 eflags: none 00:25:16.663 sectype: none 00:25:16.663 =====Discovery Log Entry 1====== 00:25:16.663 trtype: tcp 00:25:16.663 adrfam: ipv4 00:25:16.663 subtype: nvme subsystem 00:25:16.663 treq: not specified, sq flow control disable supported 00:25:16.663 portid: 1 00:25:16.663 trsvcid: 4420 00:25:16.663 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:16.663 traddr: 10.0.0.1 00:25:16.663 eflags: none 00:25:16.663 sectype: none 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:16.663 11:05:32 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:16.663 11:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:16.663 EAL: No free 2048 kB hugepages reported on node 1 00:25:19.944 Initializing NVMe Controllers 00:25:19.944 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:19.944 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:19.944 Initialization complete. Launching workers. 00:25:19.944 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 23756, failed: 0 00:25:19.944 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23756, failed to submit 0 00:25:19.944 success 0, unsuccess 23756, failed 0 00:25:19.944 11:05:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:19.944 11:05:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:19.944 EAL: No free 2048 kB hugepages reported on node 1 00:25:23.231 Initializing NVMe Controllers 00:25:23.231 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:23.231 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:23.231 Initialization complete. Launching workers. 
00:25:23.231 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 50311, failed: 0 00:25:23.231 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 12650, failed to submit 37661 00:25:23.231 success 0, unsuccess 12650, failed 0 00:25:23.231 11:05:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:25:23.231 11:05:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:23.231 EAL: No free 2048 kB hugepages reported on node 1 00:25:26.515 Initializing NVMe Controllers 00:25:26.515 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:26.515 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:25:26.515 Initialization complete. Launching workers. 00:25:26.515 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 51750, failed: 0 00:25:26.515 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 12890, failed to submit 38860 00:25:26.515 success 0, unsuccess 12890, failed 0 00:25:26.515 11:05:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:25:26.515 11:05:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:26.515 11:05:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:25:26.515 11:05:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:26.515 11:05:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:26.515 11:05:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:26.515 11:05:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:26.515 11:05:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:26.515 11:05:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:26.515 11:05:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:27.449 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:27.449 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:27.449 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:27.449 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:27.449 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:27.449 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:27.449 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:27.449 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:27.449 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:27.449 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:27.449 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:27.449 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:27.449 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:27.449 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:25:27.449 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:27.449 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:28.386 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:25:28.645 00:25:28.645 real 0m14.356s 00:25:28.645 user 0m4.290s 00:25:28.645 sys 0m3.526s 00:25:28.645 11:05:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:28.645 11:05:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:28.645 ************************************ 00:25:28.645 END TEST kernel_target_abort 00:25:28.645 ************************************ 00:25:28.645 11:05:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:28.646 11:05:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:25:28.646 11:05:44 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:28.646 11:05:44 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:25:28.646 11:05:44 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:28.646 11:05:44 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:25:28.646 11:05:44 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:28.646 11:05:44 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:28.646 rmmod nvme_tcp 00:25:28.646 rmmod nvme_fabrics 00:25:28.646 rmmod nvme_keyring 00:25:28.646 11:05:44 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:28.646 11:05:44 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:25:28.646 11:05:44 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:25:28.646 11:05:44 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2920905 ']' 00:25:28.646 11:05:44 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2920905 00:25:28.646 11:05:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@946 -- # '[' -z 2920905 ']' 00:25:28.646 11:05:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # kill -0 2920905 00:25:28.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 950: kill: (2920905) - No such process 00:25:28.646 11:05:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@973 -- # echo 'Process with pid 2920905 is not found' 00:25:28.646 Process with pid 2920905 is not found 00:25:28.646 11:05:44 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:25:28.646 11:05:44 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:30.021 Waiting for block devices as requested 00:25:30.021 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:25:30.021 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:30.021 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:30.021 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:30.021 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:30.021 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:30.280 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:30.280 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:25:30.280 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:30.280 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:25:30.539 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:25:30.539 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:25:30.539 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:25:30.539 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:25:30.797 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:25:30.797 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:25:30.797 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:25:31.055 11:05:47 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:31.055 11:05:47 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:31.055 11:05:47 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:31.055 11:05:47 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:31.055 11:05:47 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.055 11:05:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:31.055 11:05:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:32.959 11:05:49 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:32.959 00:25:32.959 real 0m39.192s 00:25:32.959 user 1m2.689s 00:25:32.959 sys 0m10.097s 00:25:32.959 11:05:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # xtrace_disable 00:25:32.959 11:05:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:32.959 ************************************ 00:25:32.959 END TEST nvmf_abort_qd_sizes 00:25:32.959 ************************************ 00:25:32.959 11:05:49 -- spdk/autotest.sh@12 -- # hostname 00:25:32.959 11:05:49 -- spdk/autotest.sh@12 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_abort_qd_sizes.info 00:25:33.216 geninfo: WARNING: invalid characters removed from testname! 00:26:05.349 11:06:17 -- spdk/autotest.sh@13 -- # echo '### URING mentions in coverage after the test ###:' 00:26:05.349 ### URING mentions in coverage after the test ###: 00:26:05.349 11:06:17 -- spdk/autotest.sh@14 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_abort_qd_sizes.info 00:26:05.349 11:06:17 -- spdk/autotest.sh@14 -- # grep -i uring 00:26:05.349 11:06:17 -- spdk/autotest.sh@15 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_abort_qd_sizes.info 00:26:05.349 11:06:17 -- spdk/autotest.sh@304 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:26:05.349 11:06:17 -- common/autotest_common.sh@1097 -- # '[' 2 -le 1 ']' 00:26:05.349 11:06:17 -- common/autotest_common.sh@1103 -- # xtrace_disable 00:26:05.349 11:06:17 -- common/autotest_common.sh@10 -- # set +x 00:26:05.349 ************************************ 00:26:05.349 START TEST keyring_file 00:26:05.349 ************************************ 00:26:05.349 11:06:17 keyring_file -- common/autotest_common.sh@1121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:26:05.349 * Looking for test storage... 
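A note on the autotest.sh@12-15 trace just above: between the END TEST nvmf_abort_qd_sizes banner and the keyring_file storage probe, it captured per-test lcov coverage and scanned the tracefile for io_uring references before deleting it. Reconstructed from the trace, with $out abbreviating the .../spdk/../output directory:

info=$out/nvmf_abort_qd_sizes.info
lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 \
    --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 \
    --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external \
    -q -c -d "$rootdir" -t "$(hostname)" -o "$info"
echo '### URING mentions in coverage after the test ###:'
cat "$info" | grep -i uring
rm "$info"

No URING matches were printed in this run (nothing appears between the banner and the rm). The keyring_file storage probe continues below.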
00:26:05.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:26:05.350 11:06:17 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:26:05.350 11:06:17 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:05.350 11:06:17 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:05.350 11:06:17 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:05.350 11:06:17 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:05.350 11:06:17 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.350 11:06:17 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.350 11:06:17 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.350 11:06:17 keyring_file -- paths/export.sh@5 -- # export PATH 00:26:05.350 11:06:17 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@47 -- # : 0 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:05.350 11:06:17 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:26:05.350 11:06:17 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:26:05.350 11:06:17 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:26:05.350 11:06:17 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:26:05.350 11:06:17 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:26:05.350 11:06:17 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:26:05.350 11:06:17 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:26:05.350 11:06:17 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:05.350 11:06:17 keyring_file -- keyring/common.sh@17 -- # name=key0 00:26:05.350 11:06:17 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:05.350 11:06:17 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:05.350 11:06:17 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:05.350 11:06:17 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.HSc5zw7Ho5 00:26:05.350 11:06:17 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@705 -- # python - 00:26:05.350 11:06:17 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HSc5zw7Ho5 00:26:05.350 11:06:17 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.HSc5zw7Ho5 00:26:05.350 11:06:17 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.HSc5zw7Ho5 00:26:05.350 11:06:17 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:26:05.350 11:06:17 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:05.350 11:06:17 keyring_file -- keyring/common.sh@17 -- # name=key1 00:26:05.350 11:06:17 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:26:05.350 11:06:17 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:05.350 11:06:17 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:05.350 11:06:17 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ZkUkhqdfs1 00:26:05.350 11:06:17 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:26:05.350 11:06:17 keyring_file -- nvmf/common.sh@705 -- # python - 00:26:05.350 11:06:17 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ZkUkhqdfs1 00:26:05.350 11:06:17 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ZkUkhqdfs1 00:26:05.350 11:06:17 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.ZkUkhqdfs1 00:26:05.350 11:06:17 keyring_file -- keyring/file.sh@30 -- # tgtpid=2931255 00:26:05.350 11:06:17 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:26:05.350 11:06:17 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2931255 00:26:05.350 11:06:17 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 2931255 ']' 00:26:05.350 11:06:17 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:05.350 11:06:17 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:05.350 11:06:17 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:05.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:05.350 11:06:17 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:05.350 11:06:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:05.350 [2024-05-15 11:06:17.536613] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
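The prep_key trace above created the test's two TLS PSK files (/tmp/tmp.HSc5zw7Ho5 for key0, /tmp/tmp.ZkUkhqdfs1 for key1) in the NVMe TLS PSK interchange format and locked them to mode 0600. The stdin fed to the traced 'python -' step is not captured by xtrace, so the sketch below is a plausible equivalent of format_key rather than the verbatim script: it wraps the raw key bytes plus a trailing CRC32 in base64 under the NVMeTLSkey-1 prefix, with a two-hex-digit digest field.

prefix=NVMeTLSkey-1 key=00112233445566778899aabbccddeeff digest=0
python - <<EOF
import base64, zlib
key = bytes.fromhex('$key')
crc = zlib.crc32(key).to_bytes(4, 'little')  # trailing checksum, byte order assumed little-endian
b64 = base64.b64encode(key + crc).decode()
print('$prefix:' + format(int('$digest'), '02x') + ':' + b64 + ':')
EOF

The 0600 mode matters: keyring_file_add_key rejects keys with looser permissions, which file.sh@80-81 further below exercises deliberately by chmod'ing key0 to 0660 and asserting the add fails. The spdk_tgt startup banner continues below.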
00:26:05.350 [2024-05-15 11:06:17.536717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2931255 ] 00:26:05.350 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.350 [2024-05-15 11:06:17.618513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.350 [2024-05-15 11:06:17.740399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.350 11:06:18 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:05.350 11:06:18 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:26:05.350 11:06:18 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:26:05.350 11:06:18 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.350 11:06:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:05.350 [2024-05-15 11:06:18.461995] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:05.350 null0 00:26:05.350 [2024-05-15 11:06:18.494006] nvmf_rpc.c: 610:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:05.350 [2024-05-15 11:06:18.494072] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:05.350 [2024-05-15 11:06:18.494563] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:26:05.350 [2024-05-15 11:06:18.502043] tcp.c:3662:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:05.350 11:06:18 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:05.350 11:06:18 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:05.350 11:06:18 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:26:05.350 11:06:18 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:05.350 11:06:18 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:05.350 11:06:18 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:05.350 11:06:18 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:05.350 11:06:18 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:05.350 11:06:18 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:26:05.350 11:06:18 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:05.350 11:06:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:05.350 [2024-05-15 11:06:18.510052] nvmf_rpc.c: 768:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:26:05.350 request: 00:26:05.350 { 00:26:05.350 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:26:05.350 "secure_channel": false, 00:26:05.350 "listen_address": { 00:26:05.350 "trtype": "tcp", 00:26:05.350 "traddr": "127.0.0.1", 00:26:05.350 "trsvcid": "4420" 00:26:05.351 }, 00:26:05.351 "method": "nvmf_subsystem_add_listener", 00:26:05.351 "req_id": 1 00:26:05.351 } 00:26:05.351 Got JSON-RPC error response 00:26:05.351 response: 00:26:05.351 { 00:26:05.351 "code": -32602, 00:26:05.351 
"message": "Invalid parameters" 00:26:05.351 } 00:26:05.351 11:06:18 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:05.351 11:06:18 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:26:05.351 11:06:18 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:05.351 11:06:18 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:05.351 11:06:18 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:05.351 11:06:18 keyring_file -- keyring/file.sh@46 -- # bperfpid=2931399 00:26:05.351 11:06:18 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:26:05.351 11:06:18 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2931399 /var/tmp/bperf.sock 00:26:05.351 11:06:18 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 2931399 ']' 00:26:05.351 11:06:18 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:05.351 11:06:18 keyring_file -- common/autotest_common.sh@832 -- # local max_retries=100 00:26:05.351 11:06:18 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:05.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:05.351 11:06:18 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:05.351 11:06:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:05.351 [2024-05-15 11:06:18.556423] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 00:26:05.351 [2024-05-15 11:06:18.556487] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2931399 ] 00:26:05.351 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.351 [2024-05-15 11:06:18.627190] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.351 [2024-05-15 11:06:18.743119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.351 11:06:19 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:05.351 11:06:19 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:26:05.351 11:06:19 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HSc5zw7Ho5 00:26:05.351 11:06:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HSc5zw7Ho5 00:26:05.351 11:06:19 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ZkUkhqdfs1 00:26:05.351 11:06:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ZkUkhqdfs1 00:26:05.351 11:06:19 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:26:05.351 11:06:19 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:26:05.351 11:06:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:05.351 11:06:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:05.351 11:06:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:26:05.351 11:06:20 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.HSc5zw7Ho5 == \/\t\m\p\/\t\m\p\.\H\S\c\5\z\w\7\H\o\5 ]] 00:26:05.351 11:06:20 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:26:05.351 11:06:20 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:26:05.351 11:06:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:05.351 11:06:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:05.351 11:06:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:05.351 11:06:20 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.ZkUkhqdfs1 == \/\t\m\p\/\t\m\p\.\Z\k\U\k\h\q\d\f\s\1 ]] 00:26:05.351 11:06:20 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:26:05.351 11:06:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:05.351 11:06:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:05.351 11:06:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:05.351 11:06:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:05.351 11:06:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:05.351 11:06:20 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:26:05.351 11:06:20 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:26:05.351 11:06:20 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:05.351 11:06:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:05.351 11:06:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:05.351 11:06:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:05.351 11:06:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:05.351 11:06:20 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:26:05.351 11:06:20 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:05.351 11:06:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:05.351 [2024-05-15 11:06:21.230356] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:05.351 nvme0n1 00:26:05.351 11:06:21 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:26:05.351 11:06:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:05.351 11:06:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:05.351 11:06:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:05.351 11:06:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:05.351 11:06:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:05.609 11:06:21 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:26:05.609 11:06:21 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:26:05.609 11:06:21 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:05.609 11:06:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:05.609 11:06:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:05.609 11:06:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:05.609 11:06:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:05.609 11:06:21 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:26:05.609 11:06:21 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:05.866 Running I/O for 1 seconds... 00:26:06.799 00:26:06.800 Latency(us) 00:26:06.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.800 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:26:06.800 nvme0n1 : 1.02 3492.01 13.64 0.00 0.00 36293.02 6796.33 77283.93 00:26:06.800 =================================================================================================================== 00:26:06.800 Total : 3492.01 13.64 0.00 0.00 36293.02 6796.33 77283.93 00:26:06.800 0 00:26:06.800 11:06:22 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:06.800 11:06:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:07.058 11:06:23 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:26:07.058 11:06:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:07.058 11:06:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:07.058 11:06:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:07.058 11:06:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:07.058 11:06:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:07.315 11:06:23 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:26:07.315 11:06:23 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:26:07.315 11:06:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:07.315 11:06:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:07.315 11:06:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:07.315 11:06:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:07.315 11:06:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:07.571 11:06:23 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:26:07.571 11:06:23 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:07.571 11:06:23 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:26:07.571 11:06:23 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:07.571 11:06:23 keyring_file -- common/autotest_common.sh@636 -- # 
local arg=bperf_cmd 00:26:07.571 11:06:23 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:07.571 11:06:23 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:26:07.571 11:06:23 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:07.571 11:06:23 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:07.571 11:06:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:26:07.829 [2024-05-15 11:06:23.955012] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:26:07.829 [2024-05-15 11:06:23.955442] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc9220 (107): Transport endpoint is not connected 00:26:07.829 [2024-05-15 11:06:23.956431] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbc9220 (9): Bad file descriptor 00:26:07.829 [2024-05-15 11:06:23.957429] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:07.829 [2024-05-15 11:06:23.957452] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:26:07.829 [2024-05-15 11:06:23.957477] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
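The connection errors just above are the intended outcome of the negative case at file.sh@69: the controller is attached with --psk key1 where key0 is the key the target accepts, so the session cannot be established and the controller lands in a failed state, and the NOT helper from autotest_common.sh converts that failure into a pass. A simplified sketch of the helper (the real one, as the surrounding trace shows, also screens signal exits above 128 and supports matching an expected error string):

NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return 1   # killed by a signal: treat as a crash, not a clean failure
    (( es != 0 ))                # NOT succeeds only when the wrapped command failed
}

The JSON-RPC request and its Invalid parameters response for the failed attach follow below.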
00:26:07.829 request: 00:26:07.829 { 00:26:07.829 "name": "nvme0", 00:26:07.829 "trtype": "tcp", 00:26:07.829 "traddr": "127.0.0.1", 00:26:07.829 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:07.829 "adrfam": "ipv4", 00:26:07.829 "trsvcid": "4420", 00:26:07.829 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:07.829 "psk": "key1", 00:26:07.829 "method": "bdev_nvme_attach_controller", 00:26:07.829 "req_id": 1 00:26:07.829 } 00:26:07.829 Got JSON-RPC error response 00:26:07.829 response: 00:26:07.829 { 00:26:07.829 "code": -32602, 00:26:07.829 "message": "Invalid parameters" 00:26:07.829 } 00:26:07.829 11:06:23 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:26:07.829 11:06:23 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:07.829 11:06:23 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:07.829 11:06:23 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:07.829 11:06:23 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:26:07.829 11:06:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:07.829 11:06:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:07.829 11:06:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:07.829 11:06:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:07.829 11:06:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:08.086 11:06:24 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:26:08.086 11:06:24 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:26:08.086 11:06:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:08.086 11:06:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:08.086 11:06:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:08.086 11:06:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:08.086 11:06:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:08.344 11:06:24 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:26:08.344 11:06:24 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:26:08.344 11:06:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:08.601 11:06:24 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:26:08.601 11:06:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:26:08.859 11:06:24 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:26:08.859 11:06:24 keyring_file -- keyring/file.sh@77 -- # jq length 00:26:08.859 11:06:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:09.117 11:06:25 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:26:09.117 11:06:25 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.HSc5zw7Ho5 00:26:09.117 11:06:25 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.HSc5zw7Ho5 00:26:09.117 11:06:25 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:26:09.117 11:06:25 
keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.HSc5zw7Ho5 00:26:09.117 11:06:25 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:26:09.117 11:06:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:09.117 11:06:25 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:26:09.117 11:06:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:09.117 11:06:25 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HSc5zw7Ho5 00:26:09.117 11:06:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HSc5zw7Ho5 00:26:09.375 [2024-05-15 11:06:25.447324] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.HSc5zw7Ho5': 0100660 00:26:09.375 [2024-05-15 11:06:25.447365] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:26:09.375 request: 00:26:09.375 { 00:26:09.375 "name": "key0", 00:26:09.375 "path": "/tmp/tmp.HSc5zw7Ho5", 00:26:09.375 "method": "keyring_file_add_key", 00:26:09.375 "req_id": 1 00:26:09.375 } 00:26:09.375 Got JSON-RPC error response 00:26:09.375 response: 00:26:09.375 { 00:26:09.375 "code": -1, 00:26:09.375 "message": "Operation not permitted" 00:26:09.375 } 00:26:09.375 11:06:25 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:26:09.375 11:06:25 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:09.375 11:06:25 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:09.375 11:06:25 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:09.375 11:06:25 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.HSc5zw7Ho5 00:26:09.375 11:06:25 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HSc5zw7Ho5 00:26:09.375 11:06:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HSc5zw7Ho5 00:26:09.632 11:06:25 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.HSc5zw7Ho5 00:26:09.632 11:06:25 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:26:09.632 11:06:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:09.632 11:06:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:09.632 11:06:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:09.632 11:06:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:09.632 11:06:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:09.890 11:06:25 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:26:09.890 11:06:25 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:09.890 11:06:25 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:26:09.890 11:06:25 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:09.890 11:06:25 
keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:26:09.890 11:06:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:09.890 11:06:25 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:26:09.890 11:06:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:09.890 11:06:25 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:09.890 11:06:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:10.148 [2024-05-15 11:06:26.205406] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.HSc5zw7Ho5': No such file or directory 00:26:10.148 [2024-05-15 11:06:26.205447] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:26:10.148 [2024-05-15 11:06:26.205474] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:26:10.148 [2024-05-15 11:06:26.205492] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:10.148 [2024-05-15 11:06:26.205504] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:26:10.148 request: 00:26:10.148 { 00:26:10.148 "name": "nvme0", 00:26:10.148 "trtype": "tcp", 00:26:10.148 "traddr": "127.0.0.1", 00:26:10.148 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:10.148 "adrfam": "ipv4", 00:26:10.148 "trsvcid": "4420", 00:26:10.148 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:10.148 "psk": "key0", 00:26:10.148 "method": "bdev_nvme_attach_controller", 00:26:10.148 "req_id": 1 00:26:10.148 } 00:26:10.148 Got JSON-RPC error response 00:26:10.148 response: 00:26:10.148 { 00:26:10.148 "code": -19, 00:26:10.148 "message": "No such device" 00:26:10.148 } 00:26:10.148 11:06:26 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:26:10.148 11:06:26 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:10.148 11:06:26 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:10.148 11:06:26 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:10.148 11:06:26 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:26:10.148 11:06:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:10.406 11:06:26 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:26:10.406 11:06:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:26:10.406 11:06:26 keyring_file -- keyring/common.sh@17 -- # name=key0 00:26:10.406 11:06:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:26:10.406 11:06:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:26:10.406 11:06:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:26:10.406 11:06:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.FFsGU40fci 00:26:10.406 11:06:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:26:10.406 11:06:26 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:26:10.406 11:06:26 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:26:10.406 11:06:26 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:26:10.406 11:06:26 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:26:10.406 11:06:26 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:26:10.406 11:06:26 keyring_file -- nvmf/common.sh@705 -- # python - 00:26:10.406 11:06:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.FFsGU40fci 00:26:10.406 11:06:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.FFsGU40fci 00:26:10.406 11:06:26 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.FFsGU40fci 00:26:10.406 11:06:26 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FFsGU40fci 00:26:10.406 11:06:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FFsGU40fci 00:26:10.663 11:06:26 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:10.663 11:06:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:10.921 nvme0n1 00:26:10.921 11:06:27 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:26:10.921 11:06:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:10.921 11:06:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:10.921 11:06:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:10.921 11:06:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:10.921 11:06:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:11.178 11:06:27 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:26:11.178 11:06:27 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:26:11.178 11:06:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:26:11.435 11:06:27 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:26:11.435 11:06:27 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:26:11.435 11:06:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:11.435 11:06:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:11.435 11:06:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:11.692 11:06:27 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:26:11.692 11:06:27 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:26:11.692 11:06:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:11.692 11:06:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:11.692 11:06:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:11.692 11:06:27 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:11.692 11:06:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:11.949 11:06:28 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:26:11.949 11:06:28 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:26:11.949 11:06:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:26:12.207 11:06:28 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:26:12.207 11:06:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:12.207 11:06:28 keyring_file -- keyring/file.sh@104 -- # jq length 00:26:12.464 11:06:28 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:26:12.464 11:06:28 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.FFsGU40fci 00:26:12.464 11:06:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.FFsGU40fci 00:26:12.721 11:06:28 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ZkUkhqdfs1 00:26:12.721 11:06:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ZkUkhqdfs1 00:26:12.979 11:06:29 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:12.979 11:06:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:26:13.237 nvme0n1 00:26:13.237 11:06:29 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:26:13.237 11:06:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:26:13.495 11:06:29 keyring_file -- keyring/file.sh@112 -- # config='{ 00:26:13.495 "subsystems": [ 00:26:13.495 { 00:26:13.495 "subsystem": "keyring", 00:26:13.495 "config": [ 00:26:13.495 { 00:26:13.495 "method": "keyring_file_add_key", 00:26:13.495 "params": { 00:26:13.495 "name": "key0", 00:26:13.495 "path": "/tmp/tmp.FFsGU40fci" 00:26:13.495 } 00:26:13.495 }, 00:26:13.495 { 00:26:13.495 "method": "keyring_file_add_key", 00:26:13.495 "params": { 00:26:13.495 "name": "key1", 00:26:13.495 "path": "/tmp/tmp.ZkUkhqdfs1" 00:26:13.495 } 00:26:13.495 } 00:26:13.495 ] 00:26:13.495 }, 00:26:13.495 { 00:26:13.495 "subsystem": "iobuf", 00:26:13.495 "config": [ 00:26:13.495 { 00:26:13.495 "method": "iobuf_set_options", 00:26:13.495 "params": { 00:26:13.495 "small_pool_count": 8192, 00:26:13.495 "large_pool_count": 1024, 00:26:13.495 "small_bufsize": 8192, 00:26:13.495 "large_bufsize": 135168 00:26:13.495 } 00:26:13.495 } 00:26:13.495 ] 00:26:13.495 }, 00:26:13.495 { 00:26:13.495 "subsystem": "sock", 00:26:13.495 "config": [ 00:26:13.495 { 00:26:13.495 "method": "sock_set_default_impl", 00:26:13.495 "params": { 00:26:13.495 
"impl_name": "posix" 00:26:13.495 } 00:26:13.495 }, 00:26:13.495 { 00:26:13.495 "method": "sock_impl_set_options", 00:26:13.495 "params": { 00:26:13.495 "impl_name": "ssl", 00:26:13.495 "recv_buf_size": 4096, 00:26:13.495 "send_buf_size": 4096, 00:26:13.495 "enable_recv_pipe": true, 00:26:13.495 "enable_quickack": false, 00:26:13.495 "enable_placement_id": 0, 00:26:13.495 "enable_zerocopy_send_server": true, 00:26:13.495 "enable_zerocopy_send_client": false, 00:26:13.495 "zerocopy_threshold": 0, 00:26:13.495 "tls_version": 0, 00:26:13.495 "enable_ktls": false 00:26:13.495 } 00:26:13.495 }, 00:26:13.495 { 00:26:13.495 "method": "sock_impl_set_options", 00:26:13.495 "params": { 00:26:13.495 "impl_name": "posix", 00:26:13.495 "recv_buf_size": 2097152, 00:26:13.495 "send_buf_size": 2097152, 00:26:13.495 "enable_recv_pipe": true, 00:26:13.495 "enable_quickack": false, 00:26:13.495 "enable_placement_id": 0, 00:26:13.495 "enable_zerocopy_send_server": true, 00:26:13.495 "enable_zerocopy_send_client": false, 00:26:13.495 "zerocopy_threshold": 0, 00:26:13.495 "tls_version": 0, 00:26:13.495 "enable_ktls": false 00:26:13.495 } 00:26:13.495 } 00:26:13.495 ] 00:26:13.495 }, 00:26:13.495 { 00:26:13.495 "subsystem": "vmd", 00:26:13.495 "config": [] 00:26:13.495 }, 00:26:13.495 { 00:26:13.495 "subsystem": "accel", 00:26:13.495 "config": [ 00:26:13.495 { 00:26:13.495 "method": "accel_set_options", 00:26:13.495 "params": { 00:26:13.495 "small_cache_size": 128, 00:26:13.495 "large_cache_size": 16, 00:26:13.495 "task_count": 2048, 00:26:13.495 "sequence_count": 2048, 00:26:13.495 "buf_count": 2048 00:26:13.495 } 00:26:13.495 } 00:26:13.495 ] 00:26:13.495 }, 00:26:13.495 { 00:26:13.495 "subsystem": "bdev", 00:26:13.495 "config": [ 00:26:13.495 { 00:26:13.495 "method": "bdev_set_options", 00:26:13.495 "params": { 00:26:13.495 "bdev_io_pool_size": 65535, 00:26:13.495 "bdev_io_cache_size": 256, 00:26:13.495 "bdev_auto_examine": true, 00:26:13.495 "iobuf_small_cache_size": 128, 00:26:13.495 "iobuf_large_cache_size": 16 00:26:13.495 } 00:26:13.495 }, 00:26:13.495 { 00:26:13.495 "method": "bdev_raid_set_options", 00:26:13.495 "params": { 00:26:13.495 "process_window_size_kb": 1024 00:26:13.495 } 00:26:13.495 }, 00:26:13.495 { 00:26:13.495 "method": "bdev_iscsi_set_options", 00:26:13.495 "params": { 00:26:13.495 "timeout_sec": 30 00:26:13.495 } 00:26:13.495 }, 00:26:13.495 { 00:26:13.495 "method": "bdev_nvme_set_options", 00:26:13.495 "params": { 00:26:13.495 "action_on_timeout": "none", 00:26:13.495 "timeout_us": 0, 00:26:13.495 "timeout_admin_us": 0, 00:26:13.495 "keep_alive_timeout_ms": 10000, 00:26:13.495 "arbitration_burst": 0, 00:26:13.495 "low_priority_weight": 0, 00:26:13.495 "medium_priority_weight": 0, 00:26:13.495 "high_priority_weight": 0, 00:26:13.495 "nvme_adminq_poll_period_us": 10000, 00:26:13.495 "nvme_ioq_poll_period_us": 0, 00:26:13.495 "io_queue_requests": 512, 00:26:13.495 "delay_cmd_submit": true, 00:26:13.495 "transport_retry_count": 4, 00:26:13.495 "bdev_retry_count": 3, 00:26:13.495 "transport_ack_timeout": 0, 00:26:13.495 "ctrlr_loss_timeout_sec": 0, 00:26:13.495 "reconnect_delay_sec": 0, 00:26:13.495 "fast_io_fail_timeout_sec": 0, 00:26:13.495 "disable_auto_failback": false, 00:26:13.495 "generate_uuids": false, 00:26:13.495 "transport_tos": 0, 00:26:13.495 "nvme_error_stat": false, 00:26:13.495 "rdma_srq_size": 0, 00:26:13.495 "io_path_stat": false, 00:26:13.495 "allow_accel_sequence": false, 00:26:13.495 "rdma_max_cq_size": 0, 00:26:13.495 "rdma_cm_event_timeout_ms": 0, 00:26:13.495 
"dhchap_digests": [ 00:26:13.495 "sha256", 00:26:13.495 "sha384", 00:26:13.495 "sha512" 00:26:13.495 ], 00:26:13.495 "dhchap_dhgroups": [ 00:26:13.495 "null", 00:26:13.496 "ffdhe2048", 00:26:13.496 "ffdhe3072", 00:26:13.496 "ffdhe4096", 00:26:13.496 "ffdhe6144", 00:26:13.496 "ffdhe8192" 00:26:13.496 ] 00:26:13.496 } 00:26:13.496 }, 00:26:13.496 { 00:26:13.496 "method": "bdev_nvme_attach_controller", 00:26:13.496 "params": { 00:26:13.496 "name": "nvme0", 00:26:13.496 "trtype": "TCP", 00:26:13.496 "adrfam": "IPv4", 00:26:13.496 "traddr": "127.0.0.1", 00:26:13.496 "trsvcid": "4420", 00:26:13.496 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:13.496 "prchk_reftag": false, 00:26:13.496 "prchk_guard": false, 00:26:13.496 "ctrlr_loss_timeout_sec": 0, 00:26:13.496 "reconnect_delay_sec": 0, 00:26:13.496 "fast_io_fail_timeout_sec": 0, 00:26:13.496 "psk": "key0", 00:26:13.496 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:13.496 "hdgst": false, 00:26:13.496 "ddgst": false 00:26:13.496 } 00:26:13.496 }, 00:26:13.496 { 00:26:13.496 "method": "bdev_nvme_set_hotplug", 00:26:13.496 "params": { 00:26:13.496 "period_us": 100000, 00:26:13.496 "enable": false 00:26:13.496 } 00:26:13.496 }, 00:26:13.496 { 00:26:13.496 "method": "bdev_wait_for_examine" 00:26:13.496 } 00:26:13.496 ] 00:26:13.496 }, 00:26:13.496 { 00:26:13.496 "subsystem": "nbd", 00:26:13.496 "config": [] 00:26:13.496 } 00:26:13.496 ] 00:26:13.496 }' 00:26:13.496 11:06:29 keyring_file -- keyring/file.sh@114 -- # killprocess 2931399 00:26:13.496 11:06:29 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 2931399 ']' 00:26:13.496 11:06:29 keyring_file -- common/autotest_common.sh@950 -- # kill -0 2931399 00:26:13.496 11:06:29 keyring_file -- common/autotest_common.sh@951 -- # uname 00:26:13.496 11:06:29 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:13.496 11:06:29 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2931399 00:26:13.496 11:06:29 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:13.496 11:06:29 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:13.496 11:06:29 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2931399' 00:26:13.496 killing process with pid 2931399 00:26:13.496 11:06:29 keyring_file -- common/autotest_common.sh@965 -- # kill 2931399 00:26:13.496 Received shutdown signal, test time was about 1.000000 seconds 00:26:13.496 00:26:13.496 Latency(us) 00:26:13.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:13.496 =================================================================================================================== 00:26:13.496 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:13.496 11:06:29 keyring_file -- common/autotest_common.sh@970 -- # wait 2931399 00:26:13.754 11:06:29 keyring_file -- keyring/file.sh@117 -- # bperfpid=2932863 00:26:13.754 11:06:29 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2932863 /var/tmp/bperf.sock 00:26:13.754 11:06:29 keyring_file -- common/autotest_common.sh@827 -- # '[' -z 2932863 ']' 00:26:13.754 11:06:29 keyring_file -- common/autotest_common.sh@831 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:13.754 11:06:29 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:26:13.754 11:06:29 keyring_file -- common/autotest_common.sh@832 -- # local 
max_retries=100 00:26:13.754 11:06:29 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:26:13.754 "subsystems": [ 00:26:13.754 { 00:26:13.754 "subsystem": "keyring", 00:26:13.754 "config": [ 00:26:13.754 { 00:26:13.754 "method": "keyring_file_add_key", 00:26:13.754 "params": { 00:26:13.754 "name": "key0", 00:26:13.754 "path": "/tmp/tmp.FFsGU40fci" 00:26:13.755 } 00:26:13.755 }, 00:26:13.755 { 00:26:13.755 "method": "keyring_file_add_key", 00:26:13.755 "params": { 00:26:13.755 "name": "key1", 00:26:13.755 "path": "/tmp/tmp.ZkUkhqdfs1" 00:26:13.755 } 00:26:13.755 } 00:26:13.755 ] 00:26:13.755 }, 00:26:13.755 { 00:26:13.755 "subsystem": "iobuf", 00:26:13.755 "config": [ 00:26:13.755 { 00:26:13.755 "method": "iobuf_set_options", 00:26:13.755 "params": { 00:26:13.755 "small_pool_count": 8192, 00:26:13.755 "large_pool_count": 1024, 00:26:13.755 "small_bufsize": 8192, 00:26:13.755 "large_bufsize": 135168 00:26:13.755 } 00:26:13.755 } 00:26:13.755 ] 00:26:13.755 }, 00:26:13.755 { 00:26:13.755 "subsystem": "sock", 00:26:13.755 "config": [ 00:26:13.755 { 00:26:13.755 "method": "sock_set_default_impl", 00:26:13.755 "params": { 00:26:13.755 "impl_name": "posix" 00:26:13.755 } 00:26:13.755 }, 00:26:13.755 { 00:26:13.755 "method": "sock_impl_set_options", 00:26:13.755 "params": { 00:26:13.755 "impl_name": "ssl", 00:26:13.755 "recv_buf_size": 4096, 00:26:13.755 "send_buf_size": 4096, 00:26:13.755 "enable_recv_pipe": true, 00:26:13.755 "enable_quickack": false, 00:26:13.755 "enable_placement_id": 0, 00:26:13.755 "enable_zerocopy_send_server": true, 00:26:13.755 "enable_zerocopy_send_client": false, 00:26:13.755 "zerocopy_threshold": 0, 00:26:13.755 "tls_version": 0, 00:26:13.755 "enable_ktls": false 00:26:13.755 } 00:26:13.755 }, 00:26:13.755 { 00:26:13.755 "method": "sock_impl_set_options", 00:26:13.755 "params": { 00:26:13.755 "impl_name": "posix", 00:26:13.755 "recv_buf_size": 2097152, 00:26:13.755 "send_buf_size": 2097152, 00:26:13.755 "enable_recv_pipe": true, 00:26:13.755 "enable_quickack": false, 00:26:13.755 "enable_placement_id": 0, 00:26:13.755 "enable_zerocopy_send_server": true, 00:26:13.755 "enable_zerocopy_send_client": false, 00:26:13.755 "zerocopy_threshold": 0, 00:26:13.755 "tls_version": 0, 00:26:13.755 "enable_ktls": false 00:26:13.755 } 00:26:13.755 } 00:26:13.755 ] 00:26:13.755 }, 00:26:13.755 { 00:26:13.755 "subsystem": "vmd", 00:26:13.755 "config": [] 00:26:13.755 }, 00:26:13.755 { 00:26:13.755 "subsystem": "accel", 00:26:13.755 "config": [ 00:26:13.755 { 00:26:13.755 "method": "accel_set_options", 00:26:13.755 "params": { 00:26:13.755 "small_cache_size": 128, 00:26:13.755 "large_cache_size": 16, 00:26:13.755 "task_count": 2048, 00:26:13.755 "sequence_count": 2048, 00:26:13.755 "buf_count": 2048 00:26:13.755 } 00:26:13.755 } 00:26:13.755 ] 00:26:13.755 }, 00:26:13.755 { 00:26:13.755 "subsystem": "bdev", 00:26:13.755 "config": [ 00:26:13.755 { 00:26:13.755 "method": "bdev_set_options", 00:26:13.755 "params": { 00:26:13.755 "bdev_io_pool_size": 65535, 00:26:13.755 "bdev_io_cache_size": 256, 00:26:13.755 "bdev_auto_examine": true, 00:26:13.755 "iobuf_small_cache_size": 128, 00:26:13.755 "iobuf_large_cache_size": 16 00:26:13.755 } 00:26:13.755 }, 00:26:13.755 { 00:26:13.755 "method": "bdev_raid_set_options", 00:26:13.755 "params": { 00:26:13.755 "process_window_size_kb": 1024 00:26:13.755 } 00:26:13.755 }, 00:26:13.755 { 00:26:13.755 "method": "bdev_iscsi_set_options", 00:26:13.755 "params": { 00:26:13.755 "timeout_sec": 30 00:26:13.755 } 00:26:13.755 }, 00:26:13.755 { 00:26:13.755 
"method": "bdev_nvme_set_options", 00:26:13.755 "params": { 00:26:13.755 "action_on_timeout": "none", 00:26:13.755 "timeout_us": 0, 00:26:13.755 "timeout_admin_us": 0, 00:26:13.755 "keep_alive_timeout_ms": 10000, 00:26:13.755 "arbitration_burst": 0, 00:26:13.755 "low_priority_weight": 0, 00:26:13.755 "medium_priority_weight": 0, 00:26:13.755 "high_priority_weight": 0, 00:26:13.755 "nvme_adminq_poll_period_us": 10000, 00:26:13.755 "nvme_ioq_poll_period_us": 0, 00:26:13.755 "io_queue_requests": 512, 00:26:13.755 "delay_cmd_submit": true, 00:26:13.755 "transport_retry_count": 4, 00:26:13.755 "bdev_retry_count": 3, 00:26:13.755 "transport_ack_timeout": 0, 00:26:13.755 "ctrlr_loss_timeout_sec": 0, 00:26:13.755 "reconnect_delay_sec": 0, 00:26:13.755 "fast_io_fail_timeout_sec": 0, 00:26:13.755 "disable_auto_failback": false, 00:26:13.755 "generate_uuids": false, 00:26:13.755 "transport_tos": 0, 00:26:13.755 "nvme_error_stat": false, 00:26:13.755 "rdma_srq_size": 0, 00:26:13.755 "io_path_stat": false, 00:26:13.755 "allow_accel_sequence": false, 00:26:13.755 "rdma_max_cq_size": 0, 00:26:13.755 "rdma_cm_event_timeout_ms": 0, 00:26:13.755 "dhchap_digests": [ 00:26:13.755 "sha256", 00:26:13.755 "sha384", 00:26:13.755 "sha512" 00:26:13.755 ], 00:26:13.755 "dhchap_dhgroups": [ 00:26:13.755 "null", 00:26:13.755 "ffdhe2048", 00:26:13.755 "ffdhe3072", 00:26:13.755 "ffdhe4096", 00:26:13.755 "ffdhe6144", 00:26:13.755 "ffdhe8192" 00:26:13.755 ] 00:26:13.755 } 00:26:13.755 }, 00:26:13.755 { 00:26:13.755 "method": "bdev_nvme_attach_controller", 00:26:13.755 "params": { 00:26:13.755 "name": "nvme0", 00:26:13.755 "trtype": "TCP", 00:26:13.755 "adrfam": "IPv4", 00:26:13.755 "traddr": "127.0.0.1", 00:26:13.755 "trsvcid": "4420", 00:26:13.755 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:13.755 "prchk_reftag": false, 00:26:13.755 "prchk_guard": false, 00:26:13.755 "ctrlr_loss_timeout_sec": 0, 00:26:13.755 "reconnect_delay_sec": 0, 00:26:13.755 "fast_io_fail_timeout_sec": 0, 00:26:13.755 "psk": "key0", 00:26:13.755 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:13.755 "hdgst": false, 00:26:13.755 "ddgst": false 00:26:13.755 } 00:26:13.755 }, 00:26:13.755 { 00:26:13.755 "method": "bdev_nvme_set_hotplug", 00:26:13.755 "params": { 00:26:13.755 "period_us": 100000, 00:26:13.755 "enable": false 00:26:13.755 } 00:26:13.755 }, 00:26:13.755 { 00:26:13.755 "method": "bdev_wait_for_examine" 00:26:13.755 } 00:26:13.755 ] 00:26:13.755 }, 00:26:13.755 { 00:26:13.755 "subsystem": "nbd", 00:26:13.755 "config": [] 00:26:13.755 } 00:26:13.755 ] 00:26:13.755 }' 00:26:13.755 11:06:29 keyring_file -- common/autotest_common.sh@834 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:13.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:13.755 11:06:29 keyring_file -- common/autotest_common.sh@836 -- # xtrace_disable 00:26:13.755 11:06:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:14.014 [2024-05-15 11:06:30.001218] Starting SPDK v24.05-pre git sha1 08ee631f2 / DPDK 23.11.0 initialization... 
00:26:14.014 [2024-05-15 11:06:30.001325] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2932863 ] 00:26:14.014 EAL: No free 2048 kB hugepages reported on node 1 00:26:14.014 [2024-05-15 11:06:30.074806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.014 [2024-05-15 11:06:30.189258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.277 [2024-05-15 11:06:30.384241] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:14.844 11:06:30 keyring_file -- common/autotest_common.sh@856 -- # (( i == 0 )) 00:26:14.844 11:06:30 keyring_file -- common/autotest_common.sh@860 -- # return 0 00:26:14.844 11:06:30 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:26:14.844 11:06:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:14.844 11:06:30 keyring_file -- keyring/file.sh@120 -- # jq length 00:26:15.102 11:06:31 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:26:15.102 11:06:31 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:26:15.102 11:06:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:26:15.102 11:06:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:15.102 11:06:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:15.102 11:06:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:15.102 11:06:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:26:15.360 11:06:31 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:26:15.360 11:06:31 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:26:15.360 11:06:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:26:15.360 11:06:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:26:15.360 11:06:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:26:15.360 11:06:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:26:15.360 11:06:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:26:15.617 11:06:31 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:26:15.617 11:06:31 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:26:15.617 11:06:31 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:26:15.618 11:06:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:26:15.875 11:06:31 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:26:15.875 11:06:31 keyring_file -- keyring/file.sh@1 -- # cleanup 00:26:15.875 11:06:31 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.FFsGU40fci /tmp/tmp.ZkUkhqdfs1 00:26:15.875 11:06:31 keyring_file -- keyring/file.sh@20 -- # killprocess 2932863 00:26:15.875 11:06:31 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 2932863 ']' 00:26:15.875 11:06:31 keyring_file -- common/autotest_common.sh@950 -- # kill -0 2932863 00:26:15.875 11:06:31 keyring_file -- common/autotest_common.sh@951 -- # 
uname 00:26:15.875 11:06:31 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:15.875 11:06:31 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2932863 00:26:15.875 11:06:31 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_1 00:26:15.876 11:06:31 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_1 = sudo ']' 00:26:15.876 11:06:31 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2932863' 00:26:15.876 killing process with pid 2932863 00:26:15.876 11:06:31 keyring_file -- common/autotest_common.sh@965 -- # kill 2932863 00:26:15.876 Received shutdown signal, test time was about 1.000000 seconds 00:26:15.876 00:26:15.876 Latency(us) 00:26:15.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:15.876 =================================================================================================================== 00:26:15.876 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:26:15.876 11:06:31 keyring_file -- common/autotest_common.sh@970 -- # wait 2932863 00:26:16.134 11:06:32 keyring_file -- keyring/file.sh@21 -- # killprocess 2931255 00:26:16.134 11:06:32 keyring_file -- common/autotest_common.sh@946 -- # '[' -z 2931255 ']' 00:26:16.134 11:06:32 keyring_file -- common/autotest_common.sh@950 -- # kill -0 2931255 00:26:16.134 11:06:32 keyring_file -- common/autotest_common.sh@951 -- # uname 00:26:16.134 11:06:32 keyring_file -- common/autotest_common.sh@951 -- # '[' Linux = Linux ']' 00:26:16.134 11:06:32 keyring_file -- common/autotest_common.sh@952 -- # ps --no-headers -o comm= 2931255 00:26:16.134 11:06:32 keyring_file -- common/autotest_common.sh@952 -- # process_name=reactor_0 00:26:16.134 11:06:32 keyring_file -- common/autotest_common.sh@956 -- # '[' reactor_0 = sudo ']' 00:26:16.134 11:06:32 keyring_file -- common/autotest_common.sh@964 -- # echo 'killing process with pid 2931255' 00:26:16.134 killing process with pid 2931255 00:26:16.134 11:06:32 keyring_file -- common/autotest_common.sh@965 -- # kill 2931255 00:26:16.134 [2024-05-15 11:06:32.280796] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:16.134 [2024-05-15 11:06:32.280867] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:16.134 11:06:32 keyring_file -- common/autotest_common.sh@970 -- # wait 2931255 00:26:16.701 00:26:16.701 real 0m15.417s 00:26:16.701 user 0m37.086s 00:26:16.701 sys 0m3.296s 00:26:16.701 11:06:32 keyring_file -- common/autotest_common.sh@1122 -- # xtrace_disable 00:26:16.701 11:06:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:26:16.701 ************************************ 00:26:16.701 END TEST keyring_file 00:26:16.701 ************************************ 00:26:16.701 11:06:32 -- spdk/autotest.sh@305 -- # [[ n == y ]] 00:26:16.701 11:06:32 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:26:16.701 11:06:32 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:26:16.701 11:06:32 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:26:16.701 11:06:32 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:26:16.701 11:06:32 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:26:16.701 11:06:32 -- spdk/autotest.sh@344 -- # '[' 0 -eq 1 ']' 00:26:16.701 11:06:32 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:26:16.701 
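killprocess, used above on both bperf pids and again on 2931255, is a liveness check plus a process-name check before the kill/wait. A minimal sketch of the idiom as traced here (the real helper in autotest_common.sh also branches on the OS, as the '[' Linux = Linux ']' checks above show):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                      # still alive?
        if [ "$(ps --no-headers -o comm= "$pid")" = sudo ]; then
            sudo kill "$pid"                            # kill a sudo'd process as root
        else
            kill "$pid"
        fi
        echo "killing process with pid $pid"
        wait "$pid" 2>/dev/null || true                 # reap it when it is our child
    }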
11:06:32 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:26:16.701 11:06:32 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:26:16.701 11:06:32 -- spdk/autotest.sh@361 -- # '[' 0 -eq 1 ']' 00:26:16.701 11:06:32 -- spdk/autotest.sh@365 -- # '[' 0 -eq 1 ']' 00:26:16.701 11:06:32 -- spdk/autotest.sh@372 -- # [[ 0 -eq 1 ]] 00:26:16.701 11:06:32 -- spdk/autotest.sh@376 -- # [[ 0 -eq 1 ]] 00:26:16.701 11:06:32 -- spdk/autotest.sh@380 -- # [[ 0 -eq 1 ]] 00:26:16.701 11:06:32 -- spdk/autotest.sh@384 -- # [[ 0 -eq 1 ]] 00:26:16.701 11:06:32 -- spdk/autotest.sh@389 -- # trap - SIGINT SIGTERM EXIT 00:26:16.701 11:06:32 -- spdk/autotest.sh@391 -- # timing_enter post_cleanup 00:26:16.701 11:06:32 -- common/autotest_common.sh@720 -- # xtrace_disable 00:26:16.701 11:06:32 -- common/autotest_common.sh@10 -- # set +x 00:26:16.701 11:06:32 -- spdk/autotest.sh@392 -- # autotest_cleanup 00:26:16.701 11:06:32 -- common/autotest_common.sh@1388 -- # local autotest_es=0 00:26:16.701 11:06:32 -- common/autotest_common.sh@1389 -- # xtrace_disable 00:26:16.701 11:06:32 -- common/autotest_common.sh@10 -- # set +x 00:26:18.633 INFO: APP EXITING 00:26:18.633 INFO: killing all VMs 00:26:18.633 INFO: killing vhost app 00:26:18.633 INFO: EXIT DONE 00:26:20.009 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:26:20.009 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:26:20.009 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:26:20.009 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:26:20.009 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:26:20.009 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:26:20.009 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:26:20.009 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:26:20.009 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:26:20.009 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:26:20.009 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:26:20.009 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:26:20.009 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:26:20.009 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:26:20.009 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:26:20.009 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:26:20.009 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:26:21.384 Cleaning 00:26:21.384 Removing: /var/run/dpdk/spdk0/config 00:26:21.384 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:21.384 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:21.384 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:21.385 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:21.385 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:26:21.385 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:26:21.385 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:26:21.385 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:26:21.385 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:21.385 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:21.385 Removing: /var/run/dpdk/spdk1/config 00:26:21.385 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:26:21.385 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:26:21.385 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:26:21.385 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:26:21.385 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:26:21.385 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:26:21.385 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:26:21.385 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:26:21.385 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:26:21.385 Removing: /var/run/dpdk/spdk1/hugepage_info 00:26:21.385 Removing: /var/run/dpdk/spdk1/mp_socket 00:26:21.385 Removing: /var/run/dpdk/spdk2/config 00:26:21.385 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:26:21.385 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:26:21.385 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:26:21.385 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:26:21.385 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:26:21.385 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:26:21.385 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:26:21.385 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:26:21.385 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:26:21.385 Removing: /var/run/dpdk/spdk2/hugepage_info 00:26:21.385 Removing: /var/run/dpdk/spdk3/config 00:26:21.385 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:26:21.385 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:26:21.385 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:26:21.385 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:26:21.385 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:26:21.385 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:26:21.385 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:26:21.643 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:26:21.643 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:26:21.643 Removing: /var/run/dpdk/spdk3/hugepage_info 00:26:21.643 Removing: /var/run/dpdk/spdk4/config 00:26:21.643 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:26:21.643 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:26:21.643 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:26:21.643 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:26:21.643 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:26:21.643 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:26:21.643 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:26:21.643 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:26:21.643 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:26:21.643 Removing: /var/run/dpdk/spdk4/hugepage_info 00:26:21.643 Removing: /dev/shm/bdev_svc_trace.1 00:26:21.643 Removing: /dev/shm/nvmf_trace.0 00:26:21.643 Removing: /dev/shm/spdk_tgt_trace.pid2667176 00:26:21.643 Removing: /var/run/dpdk/spdk0 00:26:21.643 Removing: /var/run/dpdk/spdk1 00:26:21.643 Removing: /var/run/dpdk/spdk2 00:26:21.643 Removing: /var/run/dpdk/spdk3 00:26:21.643 Removing: /var/run/dpdk/spdk4 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2665568 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2666308 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2667176 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2667554 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2668250 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2668512 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2669230 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2669240 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2669484 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2670800 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2671785 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2672031 
00:26:21.643 Removing: /var/run/dpdk/spdk_pid2672216 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2672553 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2672741 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2672901 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2673059 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2673359 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2673817 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2676166 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2676331 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2676621 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2676629 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2677058 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2677194 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2677529 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2677636 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2677798 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2677936 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2678108 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2678236 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2678720 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2678875 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2679186 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2679360 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2679392 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2679821 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2680236 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2680399 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2680665 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2680834 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2680985 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2681261 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2681422 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2681615 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2681850 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2682017 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2682283 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2682446 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2682609 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2682877 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2683040 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2683198 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2683473 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2683637 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2683866 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2684070 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2684263 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2684482 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2686955 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2715973 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2718921 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2726894 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2730493 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2733520 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2733933 00:26:21.643 Removing: /var/run/dpdk/spdk_pid2742014 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2742016 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2742678 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2743217 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2743876 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2744272 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2744280 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2744531 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2744553 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2744630 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2745220 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2745872 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2746529 
00:26:21.901 Removing: /var/run/dpdk/spdk_pid2746926 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2746938 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2747077 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2748092 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2748815 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2755219 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2755491 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2758418 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2762532 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2764597 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2771821 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2777849 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2779047 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2779717 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2791530 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2794036 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2797241 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2798434 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2799753 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2799779 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2799910 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2800044 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2800480 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2801803 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2802539 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2802968 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2804583 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2805025 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2805712 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2808523 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2815264 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2817921 00:26:21.901 Removing: /var/run/dpdk/spdk_pid2822273 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2823452 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2825030 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2827961 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2830650 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2835698 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2835708 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2838904 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2839161 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2839294 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2839576 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2839692 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2842611 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2842944 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2845906 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2847889 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2851597 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2855455 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2862117 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2867553 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2867593 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2880630 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2881170 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2881703 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2882247 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2882826 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2883238 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2883774 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2884234 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2887101 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2887361 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2891534 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2891623 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2896885 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2902663 
00:26:21.902 Removing: /var/run/dpdk/spdk_pid2902682 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2906069 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2907477 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2908881 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2909695 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2911026 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2911911 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2921333 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2921725 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2922116 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2923655 00:26:21.902 Removing: /var/run/dpdk/spdk_pid2924052 00:26:22.160 Removing: /var/run/dpdk/spdk_pid2924452 00:26:22.160 Removing: /var/run/dpdk/spdk_pid2931255 00:26:22.160 Removing: /var/run/dpdk/spdk_pid2931399 00:26:22.160 Removing: /var/run/dpdk/spdk_pid2932863 00:26:22.160 Clean 00:26:22.160 11:06:38 -- common/autotest_common.sh@1447 -- # return 0 00:26:22.160 11:06:38 -- spdk/autotest.sh@393 -- # timing_exit post_cleanup 00:26:22.160 11:06:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:22.160 11:06:38 -- common/autotest_common.sh@10 -- # set +x 00:26:22.160 11:06:38 -- spdk/autotest.sh@395 -- # timing_exit autotest 00:26:22.160 11:06:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:22.160 11:06:38 -- common/autotest_common.sh@10 -- # set +x 00:26:22.160 11:06:38 -- spdk/autotest.sh@396 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:26:22.160 11:06:38 -- spdk/autotest.sh@398 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:26:22.160 11:06:38 -- spdk/autotest.sh@398 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:26:22.160 11:06:38 -- spdk/autotest.sh@400 -- # hash lcov 00:26:22.160 11:06:38 -- spdk/autotest.sh@400 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:26:22.160 11:06:38 -- spdk/autotest.sh@402 -- # hostname 00:26:22.160 11:06:38 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:26:22.418 geninfo: WARNING: invalid characters removed from testname! 
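The lcov calls that follow implement the usual capture/merge/filter coverage flow: a per-test capture tagged with the hostname, a merge with the baseline capture, then removal of dpdk, /usr, example, and app sources. A condensed sketch with the repeated options factored out ($LCOV_OPTS is a subset of the flags in this log; $OUT and the genhtml step are illustrative additions, not taken from this run):

    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q'
    OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output    # illustrative shorthand

    lcov $LCOV_OPTS -c -d ./spdk -t "$(hostname)" -o "$OUT/cov_test.info"         # capture
    lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" \
         -o "$OUT/cov_total.info"                                                 # merge
    lcov $LCOV_OPTS -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"  # filter
    genhtml "$OUT/cov_total.info" -o "$OUT/coverage"    # illustrative HTML report step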
00:26:54.490 11:07:05 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:26:54.490 11:07:09 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:26:57.769 11:07:13 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:27:00.309 11:07:16 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:27:02.913 11:07:19 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:27:06.210 11:07:21 -- spdk/autotest.sh@408 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:27:08.750 11:07:24 -- spdk/autotest.sh@409 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:27:08.750 11:07:24 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:08.750 11:07:24 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:27:08.750 11:07:24 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:08.750 11:07:24 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:08.750 11:07:24 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.750 11:07:24 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.750 11:07:24 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.750 11:07:24 -- paths/export.sh@5 -- $ export PATH 00:27:08.750 11:07:24 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:08.750 11:07:24 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:27:08.750 11:07:24 -- common/autobuild_common.sh@437 -- $ date +%s 00:27:08.750 11:07:24 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715764044.XXXXXX 00:27:08.750 11:07:24 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715764044.RmhjXd 00:27:08.750 11:07:24 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:27:08.750 11:07:24 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:27:08.750 11:07:24 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:27:08.750 11:07:24 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:27:08.750 11:07:24 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:27:08.750 11:07:24 -- common/autobuild_common.sh@453 -- $ get_config_params 00:27:08.750 11:07:24 -- common/autotest_common.sh@395 -- $ xtrace_disable 00:27:08.750 11:07:24 -- common/autotest_common.sh@10 -- $ set +x 00:27:08.750 11:07:24 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:27:08.750 11:07:24 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:27:08.750 11:07:24 -- pm/common@17 -- $ local monitor 00:27:08.750 11:07:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:08.750 11:07:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:08.750 11:07:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:08.750 11:07:24 -- pm/common@21 -- $ date +%s 00:27:08.750 11:07:24 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:08.750 11:07:24 -- pm/common@21 -- $ date +%s 00:27:08.750 
11:07:24 -- pm/common@25 -- $ sleep 1 00:27:08.750 11:07:24 -- pm/common@21 -- $ date +%s 00:27:08.750 11:07:24 -- pm/common@21 -- $ date +%s 00:27:08.750 11:07:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715764044 00:27:08.750 11:07:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715764044 00:27:08.750 11:07:24 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715764044 00:27:08.750 11:07:24 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715764044 00:27:08.750 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715764044_collect-vmstat.pm.log 00:27:08.750 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715764044_collect-cpu-load.pm.log 00:27:08.750 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715764044_collect-cpu-temp.pm.log 00:27:08.750 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715764044_collect-bmc-pm.bmc.pm.log 00:27:09.686 11:07:25 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:27:09.686 11:07:25 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:27:09.686 11:07:25 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:27:09.686 11:07:25 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:27:09.686 11:07:25 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:27:09.686 11:07:25 -- spdk/autopackage.sh@19 -- $ timing_finish 00:27:09.686 11:07:25 -- common/autotest_common.sh@732 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:27:09.686 11:07:25 -- common/autotest_common.sh@733 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:27:09.686 11:07:25 -- common/autotest_common.sh@735 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:27:09.686 11:07:25 -- spdk/autopackage.sh@20 -- $ exit 0 00:27:09.686 11:07:25 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:27:09.686 11:07:25 -- pm/common@29 -- $ signal_monitor_resources TERM 00:27:09.686 11:07:25 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:27:09.686 11:07:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:09.686 11:07:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:27:09.686 11:07:25 -- pm/common@44 -- $ pid=2942388 00:27:09.686 11:07:25 -- pm/common@50 -- $ kill -TERM 2942388 00:27:09.686 11:07:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:09.686 11:07:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:27:09.686 11:07:25 -- pm/common@44 -- $ pid=2942390 00:27:09.686 11:07:25 -- pm/common@50 -- $ kill 
-TERM 2942390 00:27:09.686 11:07:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:09.686 11:07:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:27:09.686 11:07:25 -- pm/common@44 -- $ pid=2942392 00:27:09.686 11:07:25 -- pm/common@50 -- $ kill -TERM 2942392 00:27:09.686 11:07:25 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:27:09.686 11:07:25 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:27:09.686 11:07:25 -- pm/common@44 -- $ pid=2942428 00:27:09.686 11:07:25 -- pm/common@50 -- $ sudo -E kill -TERM 2942428 00:27:09.945 + [[ -n 2579651 ]] 00:27:09.945 + sudo kill 2579651 00:27:09.955 [Pipeline] } 00:27:09.973 [Pipeline] // stage 00:27:09.978 [Pipeline] } 00:27:09.995 [Pipeline] // timeout 00:27:10.001 [Pipeline] } 00:27:10.019 [Pipeline] // catchError 00:27:10.023 [Pipeline] } 00:27:10.035 [Pipeline] // wrap 00:27:10.040 [Pipeline] } 00:27:10.055 [Pipeline] // catchError 00:27:10.062 [Pipeline] stage 00:27:10.064 [Pipeline] { (Epilogue) 00:27:10.076 [Pipeline] catchError 00:27:10.077 [Pipeline] { 00:27:10.091 [Pipeline] echo 00:27:10.093 Cleanup processes 00:27:10.098 [Pipeline] sh 00:27:10.381 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:27:10.381 2942536 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:27:10.381 2942652 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:27:10.396 [Pipeline] sh 00:27:10.679 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:27:10.679 ++ grep -v 'sudo pgrep' 00:27:10.679 ++ awk '{print $1}' 00:27:10.679 + sudo kill -9 2942536 00:27:10.690 [Pipeline] sh 00:27:10.968 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:27:20.978 [Pipeline] sh 00:27:21.259 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:27:21.259 Artifacts sizes are good 00:27:21.276 [Pipeline] archiveArtifacts 00:27:21.285 Archiving artifacts 00:27:21.468 [Pipeline] sh 00:27:21.748 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:27:21.767 [Pipeline] cleanWs 00:27:21.779 [WS-CLEANUP] Deleting project workspace... 00:27:21.779 [WS-CLEANUP] Deferred wipeout is used... 00:27:21.785 [WS-CLEANUP] done 00:27:21.787 [Pipeline] } 00:27:21.810 [Pipeline] // catchError 00:27:21.822 [Pipeline] sh 00:27:22.101 + logger -p user.info -t JENKINS-CI 00:27:22.110 [Pipeline] } 00:27:22.127 [Pipeline] // stage 00:27:22.132 [Pipeline] } 00:27:22.151 [Pipeline] // node 00:27:22.156 [Pipeline] End of Pipeline 00:27:22.189 Finished: SUCCESS
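For reference, the epilogue's stray-process sweep (pgrep, grep -v, awk, kill -9) reduces to one idiom; a hedged sketch assuming $WORKSPACE names the Jenkins workspace used throughout this log:

    # List anything still running out of the workspace, drop the pgrep itself,
    # and force-kill the rest; "|| true" keeps the step green when nothing matched.
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    [ -z "$pids" ] || sudo kill -9 $pids || true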